Science.gov

Sample records for optimal sensor fusion

  1. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
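
    The toolkit itself is not publicly released; purely as a generic sketch of the scalar Kalman measurement update that such filtering builds on (the variances and measurements below are invented for illustration):

```python
# Scalar Kalman filter measurement updates fusing two range sensors.
# Illustrative only: all numbers are invented, not from the toolkit.

def kalman_update(x, P, z, R):
    """One measurement update of estimate x (variance P)
    with measurement z (noise variance R)."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # corrected estimate
    P_new = (1.0 - K) * P    # reduced uncertainty
    return x_new, P_new

# Prior from propagation, then sequential updates from two sensors:
x, P = 100.0, 25.0
x, P = kalman_update(x, P, 103.0, 9.0)  # sensor A, variance 9
x, P = kalman_update(x, P, 101.0, 4.0)  # sensor B, variance 4
```

    Each update shrinks the variance, which is why fusing both sensors beats using either alone.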

  2. Tier-scalable reconnaissance: the challenge of sensor optimization, sensor deployment, sensor fusion, and sensor interoperability

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; George, Thomas; Tarbell, Mark A.

    2007-04-01

    Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as minefields, battlefield environments, enemy-occupied territories, terrorist-infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real-time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering- and safety-constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability.

  3. Optimal sensor fusion for land vehicle navigation

    SciTech Connect

    Morrow, J.D.

    1990-10-01

    Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
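
    The "optimal" algorithm described above is the classic minimum-variance combination of two independent, unbiased estimates: weight each by its inverse variance. A minimal sketch, with invented variances standing in for the beacon and dead-reckoning errors:

```python
# Minimum-variance fusion of two independent position estimates.
# The variances below are invented for illustration.

def fuse_min_variance(x1, var1, x2, var2):
    """Weight each estimate by its inverse variance; the fused
    variance is never larger than the smaller input variance."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    w2 = 1.0 - w1
    x_fused = w1 * x1 + w2 * x2
    var_fused = 1.0 / (1.0 / var1 + 1.0 / var2)
    return x_fused, var_fused

# Radio-beacon fix (noisier) vs dead reckoning (tighter, short-term):
x, v = fuse_min_variance(12.0, 4.0, 10.0, 1.0)
```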

  4. Optimal fusion rule for distributed detection in clustered wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Aldalahmeh, Sami A.; Ghogho, Mounir; McLernon, Des; Nurellari, Edmond

    2016-01-01

    We consider distributed detection in a clustered wireless sensor network (WSN) deployed randomly in a large field for the purpose of intrusion detection. The WSN is modeled by a homogeneous Poisson point process. The sensor nodes (SNs) compute local decisions about the intruder's presence and send them to the cluster heads (CHs). A stochastic geometry framework is employed to derive the optimal cluster-based fusion rule (OCR), which is a weighted average of the local decision sum of each cluster. Interestingly, this structure reduces the effect of false alarms on the detection performance. Moreover, a generalized likelihood ratio test (GLRT) for cluster-based fusion (GCR) is developed to handle the case of unknown intruder parameters. Simulation results show that the OCR performance is close to the Chair-Varshney rule. In fact, the latter benchmark can be reached by forming more clusters in the network without increasing the SN deployment intensity. Simulation results also show that the GCR performs very close to the OCR when the number of clusters is large enough. The performance is further improved when the SN deployment intensity is increased.
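
    The optimal cluster-based rule above thresholds a weighted average of per-cluster decision sums. A toy sketch of that structure (the cluster weights and threshold are placeholders, not the paper's derived values):

```python
# Sketch of cluster-based decision fusion: each cluster head sums its
# sensors' binary decisions, and the fusion center thresholds a
# weighted average of the cluster sums. Weights/threshold are invented.

def cluster_fusion(cluster_decisions, weights, threshold):
    """cluster_decisions: one list of 0/1 local decisions per cluster."""
    sums = [sum(c) for c in cluster_decisions]             # per-cluster counts
    statistic = sum(w * s for w, s in zip(weights, sums))  # weighted average
    statistic /= sum(weights)
    return statistic > threshold, statistic

clusters = [[1, 1, 0, 1], [0, 0, 1, 0], [1, 1, 1, 1]]
detected, stat = cluster_fusion(clusters, weights=[1.0, 0.5, 1.5],
                                threshold=2.0)
```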

  5. A weighted optimization approach to time-of-flight sensor fusion.

    PubMed

    Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger

    2014-01-01

    Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference by a factor of four in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our approach to time-of-flight super resolution for depth scenery capture. PMID:24184728
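
    In one dimension, a weighted energy formulation of this kind reduces to a small linear system: a data term pulls each sample toward its (sparse) depth measurement, and a texture-guided smoothness weight relaxes across object boundaries. All weights below are invented for illustration, not the paper's three weighting strategies:

```python
# 1-D sketch of depth upscaling as weighted energy minimisation:
# argmin_x sum_i data_w[i]*(x[i]-d[i])^2
#          + lam * sum_i smooth_w[i]*(x[i+1]-x[i])^2
import numpy as np

def upscale_depth(d, data_w, smooth_w, lam=1.0):
    """Solve the quadratic energy above as a linear system A x = b."""
    n = len(d)
    A = np.diag(data_w).astype(float)
    b = data_w * d
    for i in range(n - 1):           # add the smoothness (Laplacian) terms
        w = lam * smooth_w[i]
        A[i, i] += w; A[i + 1, i + 1] += w
        A[i, i + 1] -= w; A[i + 1, i] -= w
    return np.linalg.solve(A, b)

d        = np.array([1.0, 0.0, 0.0, 5.0])  # sparse depth (0 = missing)
data_w   = np.array([1.0, 0.0, 0.0, 1.0])  # confidence in each sample
smooth_w = np.array([1.0, 1.0, 0.1])       # low weight at a texture edge
x = upscale_depth(d, data_w, smooth_w)
```

    The low smoothness weight at the last link lets the solution jump there, mimicking a preserved object boundary.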

  6. Adaptive sensor fusion using genetic algorithms

    SciTech Connect

    Fitzgerald, D.S.; Adams, D.G.

    1994-08-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion.

  7. Analytical performance evaluation for autonomous sensor fusion

    NASA Astrophysics Data System (ADS)

    Chang, K. C.

    2008-04-01

    A distributed data fusion system consists of a network of sensors, each capable of local processing and fusion of sensor data. There has been a great deal of work in developing distributed fusion algorithms applicable to a network centric architecture. Currently there are at least a few approaches including naive fusion, cross-correlation fusion, information graph fusion, maximum a posteriori (MAP) fusion, channel filter fusion, and covariance intersection fusion. However, in general, in a distributed system such as the ad hoc sensor networks, the communication architecture is not fixed. Each node has knowledge of only its local connectivity but not the global network topology. In those cases, the distributed fusion algorithm based on an information graph type of approach may not scale due to its requirement to carry long pedigree information for decorrelation. In this paper, we focus on scalable fusion algorithms and conduct analytical performance evaluation to compare their performance. The goal is to understand the performance of those algorithms under different operating conditions. Specifically, we evaluate the performance of channel filter fusion, Chernoff fusion, Shannon fusion, and Bhattacharyya fusion algorithms. We also compare their results to naive fusion and "optimal" centralized fusion algorithms under a specific communication pattern.
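
    Of the rules compared above, covariance intersection has a particularly compact form: a convex combination of the two information matrices, which remains consistent even when the cross-correlation between the estimates is unknown. A sketch with invented 2x2 covariances:

```python
# Covariance intersection of two estimates (x1, P1) and (x2, P2).
# The state vectors and covariances below are invented for illustration.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse two estimates with mixing weight omega in [0, 1];
    the fused information is a convex combination of the inputs'."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
    return x, P

x1, P1 = np.array([0.0, 0.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([1.0, 1.0]), np.diag([1.0, 4.0])
x, P = covariance_intersection(x1, P1, x2, P2, omega=0.5)
```

    In practice omega is often chosen to minimize the trace or determinant of the fused covariance; a fixed 0.5 is used here only to keep the sketch short.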

  8. Adaptive sensor fusion

    NASA Astrophysics Data System (ADS)

    Kadar, Ivan

    1995-07-01

    A perceptual reasoning system adaptively extracting, associating, and fusing information from multiple sources, at various levels of abstraction, is considered as the building block for the next generation of surveillance systems. A system architecture is presented which makes use of both centralized and distributed predetection fusion combined with intelligent monitor and control coupling both on-platform and off-board track and decision level fusion results. The goal of this system is to create a `gestalt fused sensor system' whose information product is greater than the sum of the information products from the individual sensors and has performance superior to either individual or a sub-group of combined sensors. The application of this architectural concept to the law enforcement arena (e.g. drug interdiction) utilizing multiple spatially and temporally diverse surveillance platforms and/or information sources, is used to illustrate the benefits of the adaptive perceptual reasoning system concept.

  9. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of flotation process is optimized by the GSO algorithm with congestion factor. Simulation results show that the model has better generalization and prediction accuracy to meet the online soft-sensor requirements of the real-time control in the flotation process. PMID:24982935

  11. Feature-level sensor fusion

    NASA Astrophysics Data System (ADS)

    Peli, Tamar; Young, Mon; Knox, Robert; Ellis, Kenneth K.; Bennett, Frederick

    1999-03-01

    This paper describes two practical fusion techniques for automatic target cueing that combine features derived from each sensor's data at the object level. In the hybrid fusion method, each input sensor's data is prescreened before the fusion stage. The cued fusion method assumes that one of the sensors is designated as a primary sensor, and thus ATC is only applied to its input data. If one of the sensors exhibits a higher Pd and/or a lower false alarm rate, it can be selected as the primary sensor. However, if the ground coverage can be segmented into regions in which one of the sensors is known to exhibit better performance, then the cued fusion can be applied locally/adaptively by switching the choice of a primary sensor. Otherwise, the cued fusion is applied both ways and the outputs of each cued mode are combined. Both fusion approaches use a back-end discrimination stage that is applied to a combined feature vector to reduce false alarms. The two fusion processes were applied to spectral and radar sensor data and were shown to provide substantial false alarm reduction. The approaches are easily extendable to more than two sensors.

  12. Distributed multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Scheffel, Peter; Fish, Robert; Knobler, Ron; Plummer, Thomas

    2008-03-01

    McQ has developed a broad-based capability to fuse information in a geographic area from multiple sensors to build a better understanding of the situation. The paper will discuss the fusion architecture implemented by McQ to use many sensors and share their information. This multi-sensor fusion architecture includes data sharing and analysis at the individual sensor, at communications nodes that connect many sensors together, at the system server/user interface, and across multi-source information available through networked services. McQ will present a data fusion architecture that integrates a "Feature Information Base" (FIB) with McQ's well-known Common Data Interchange Format (CDIF) data structure. The distributed multi-sensor fusion provides enhanced situation awareness for the user.

  13. Sensor fusion for airborne landmine detection

    NASA Astrophysics Data System (ADS)

    Schatten, Miranda A.; Gader, Paul D.; Bolton, Jeremy; Zare, Alina; Mendez-Vasquez, Andres

    2006-05-01

    Sensor fusion has become a vital research area for mine detection because of the countermine community's conclusion that no single sensor is capable of detecting mines at the necessary detection and false alarm rates over a wide variety of operating conditions. The U. S. Army Night Vision and Electronic Sensors Directorate (NVESD) evaluates sensors and algorithms for use in a multi-sensor multi-platform airborne detection modality. A large dataset of hyperspectral and radar imagery exists from the four major data collections performed at U. S. Army temperate and arid testing facilities in Autumn 2002, Spring 2003, Summer 2004, and Summer 2005. There are a number of algorithm developers working on single-sensor algorithms in order to optimize feature and classifier selection for that sensor type. However, a given sensor/algorithm system has an absolute limitation based on the physical phenomena that system is capable of sensing. Therefore, we perform decision-level fusion of the outputs from single-channel algorithms, and we choose to combine systems whose information is complementary across operating conditions. That way, the final fused system will be robust to a variety of conditions, which is a critical property of a countermine detection system. In this paper, we present the analysis of fusion algorithms on data from a sensor suite consisting of high frequency radar imagery combined with hyperspectral long-wave infrared sensor imagery. The main type of fusion being considered is Choquet integral fusion. We evaluate performance achieved using the Choquet integral method for sensor fusion versus Boolean and soft "and"/"or" combinations, the mean, and majority voting.
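
    The discrete Choquet integral aggregates detector confidences with respect to a fuzzy measure defined on subsets of the sensors; Boolean "and"/"or" and the mean arise as special choices of the measure. A minimal sketch for two sensors (the measure values and confidences below are invented, not taken from the paper; in practice the measure would be learned from training data):

```python
# Discrete Choquet integral of sensor confidences w.r.t. a fuzzy
# measure g on subsets of sensors. All numbers are invented examples.

def choquet(values, measure):
    """values: {sensor: confidence in [0, 1]};
    measure: {frozenset of sensors: weight}, g(all)=1, g(empty)=0."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending
    total, prev = 0.0, 0.0
    remaining = set(values)           # sensors with confidence >= current
    for sensor, v in items:
        total += (v - prev) * measure[frozenset(remaining)]
        prev = v
        remaining.discard(sensor)
    return total

g = {frozenset(): 0.0,
     frozenset({"radar"}): 0.4,
     frozenset({"ir"}): 0.5,
     frozenset({"radar", "ir"}): 1.0}
score = choquet({"radar": 0.9, "ir": 0.3}, g)
```

    The result always lies between the minimum and maximum input confidence, with the fuzzy measure controlling how strongly agreement between sensors is rewarded.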

  14. Sensor fusion for synthetic vision

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Larimer, J.; Ahumada, A.

    1991-01-01

    Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.

  15. Passive-sensor data fusion

    NASA Astrophysics Data System (ADS)

    Kolitz, Stephan E.

    1991-08-01

    Problems in multi-sensor data fusion are addressed for passive (angle-only) sensors; the example used is a constellation of IR sensors on satellites in low-earth orbit, viewing up to several hundred ballistic missile targets. The sensor data used in the methodology of the report is 'post-detection,' with targets resolved on single pixels (it is possible for several targets to be resolved on the same pixel). A 'scan' by a sensor is modeled by the formation of a rectangular focal plane image of lit pixels (bits with value 1), representing the presence of at least one target, and unlit pixels (bits with value 0), representing the absence of a target, at a particular time. Approaches and algorithmic solutions are developed which address the following passive sensor data fusion problems: scan-to-scan target association, and association classification. The ultimate objective is to estimate target states, for use in a larger battle management system. Results indicate that successful scan-to-scan target association is feasible at scan rates >= 2 Hz, independent of resolution. Sensor-to-sensor target association is difficult at low resolution; even with high-resolution sensors the performance of a standard two-sensor single-scan approach is variable and unpredictable, since it is a function of the relative geometry of sensors and targets. A single-scan approach using the Varad algorithm and three sensors is not as sensitive to this relative geometry, but is usable only for high-resolution sensors. Innovative multi-scan and multi-sensor modifications of the three-sensor Varad algorithm are developed which provide excellent performance for a wide range of sensor resolutions. The multi-sensor multi-scan methodology also provides accurate information on the classification of target associations as correct or incorrect. For the scenarios examined with resolution cell sizes ranging from 300 m to 2 km, association errors are less than 5% and essentially no classification errors occur.

  16. Imaging sensor fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Currie, Nicholas C.; Demma, Fred J.; Ferris, David D., Jr.; McMillan, Robert W.; Wicks, Michael C.; Zyga, Kathleen

    1997-02-01

    Sensors are needed for concealed weapon detection which perform better with regard to weapon classification, identification, probability of detection and false alarm rate than the magnetic sensors commonly used in airports. We have concluded that no single sensor will meet the requirements for a reliable concealed weapon detector and thus that sensor fusion is required to optimize detection probability and false alarm rate by combining sensor outputs in a synergistic fashion. This paper describes microwave, millimeter wave, far infrared, infrared, x-ray, acoustic, and magnetic sensors which have some promise in the field of concealed weapon detection. The strengths and weaknesses of these devices are discussed, and examples of the outputs of most of them are given. Various approaches to fusion of these sensors are also described, from simple cuing of one sensor by another to improvement of image quality by using multiple systems. It is further concluded that none of the sensors described herein will ever replace entirely the airport metal detector, but that many of them meet needs imposed by applications requiring a higher detection probability and lower false alarm rate.

  17. Multiple objective optimization for active sensor management

    NASA Astrophysics Data System (ADS)

    Page, Scott F.; Dolia, Alexander N.; Harris, Chris J.; White, Neil M.

    2005-03-01

    The performance of a multi-sensor data fusion system is inherently constrained by the configuration of the given sensor suite. Intelligent or adaptive control of sensor resources has been shown to offer improved fusion performance in many applications. Common approaches to sensor management select sensor observation tasks that are optimal in terms of a measure of information. However, optimising for information alone is inherently sub-optimal as it does not take account of any other system requirements such as stealth or sensor power conservation. We discuss the issues relating to developing a suite of performance metrics for optimising multi-sensor systems and propose some candidate metrics. In addition, it may not always be necessary to maximize information gain; in some cases small increases in information gain may come at the cost of large sensor resource requirements. Additionally, the problems of sensor tasking and placement are usually treated separately, leading to a lack of coherency between sensor management frameworks. We propose a novel approach based on a high level decentralized information-theoretic sensor management architecture that unifies the processes of sensor tasking and sensor placement into a single framework. Sensors are controlled using a minimax multiple objective optimisation approach in order to address probability of target detection, sensor power consumption, and sensor survivability whilst maintaining a target estimation covariance threshold. We demonstrate the potential of the approach through simulation of a multi-sensor, target tracking scenario and compare the results with a single objective information based approach.

  18. Real-time sensor validation and fusion for distributed autonomous sensors

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.

    2004-04-01

    Multi-sensor data fusion has found widespread applications in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture which consists of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates the distribution of intelligence to the sensor level and the sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for a specific sensor fusion application. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state issue noted above. The state is computed using an adaptive estimator and dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
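
    A minimal sketch of confidence-weighted averaging in the spirit of the numeric fusion layer described above: each reading is weighted by a confidence that decays with its distance from the running estimate, so an outlying (possibly faulty) sensor is discounted. The Gaussian confidence curve and the readings are invented stand-ins for the paper's adaptive estimator and dynamic validation curve:

```python
# Confidence-weighted averaging with a simple (invented) Gaussian
# validation curve: far-off readings get near-zero weight.
import math

def fuse_with_confidence(readings, estimate, sigma=1.0):
    weights = [math.exp(-0.5 * ((r - estimate) / sigma) ** 2)
               for r in readings]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# Three agreeing sensors and one outlier; prior estimate 10.0:
fused = fuse_with_confidence([10.1, 9.9, 10.2, 25.0], estimate=10.0)
```

    A plain average of the four readings would be pulled to about 13.8; the confidence weighting keeps the fused value near the agreeing cluster.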

  19. Sensor fusion with application to electronic warfare

    NASA Astrophysics Data System (ADS)

    Zanzalari, Robert M.; Van Alstine, Edward

    1999-03-01

    The Night Vision and Electronic Sensors Directorate, Survivability/Camouflage, Concealment and Deception Division mission is to provide affordable aircraft and ground electronic sensor/systems and signature management technologies which enhance survivability and lethality of US and International Forces. Since 1992, efforts have been undertaken in the area of Situational Awareness and Dominant Battlespace Knowledge. These include the Radar Deception and Jamming Advanced Technology Demonstration (ATD), Survivability and Targeting System Integration, Integrated Situation Awareness and Targeting ATD, Combat Identification, Ground Vehicle Situational Awareness, and Combined Electronic Intelligence Target Correlation. This paper will address the Situational Awareness process as it relates to the integration of Electronic Warfare (EW) with targeting and intelligence and information warfare systems. Discussion will be presented on the Sensor Fusion, Situation Assessment and Response Management Strategies. Sensor Fusion includes the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats as well as their significance. Situation Assessment includes the process of interpreting and expressing the environment based on situation abstract products and information from technical and doctrinal databases. Finally, Response Management provides the centralized, adaptable control of all renewable and expendable countermeasure assets resulting in optimization of the response to the threat environment.

  20. Sensor fusion for intelligent alarm analysis

    SciTech Connect

    Nelson, C.L.; Fitzgerald, D.S.

    1995-03-01

    The purpose of an intelligent alarm analysis system is to provide complete and manageable information to a central alarm station operator by applying alarm processing and fusion techniques to sensor information. This paper discusses the sensor fusion approach taken to perform intelligent alarm analysis for the Advanced Exterior Sensor (AES). The AES is an intrusion detection and assessment system designed for wide-area coverage, quick deployment, low false/nuisance alarm operation, and immediate visual assessment. It combines three sensor technologies (visible, infrared, and millimeter wave radar) collocated on a compact and portable remote sensor module. The remote sensor module rotates at a rate of 1 revolution per second to detect and track motion and provide assessment in a continuous 360° field-of-regard. Sensor fusion techniques are used to correlate and integrate the track data from these three sensors into a single track for operator observation. Additional inputs to the fusion process include environmental data, knowledge of sensor performance under certain weather conditions, sensor priority, and recent operator feedback. A confidence value is assigned to the track as a result of the fusion process. This helps to reduce nuisance alarms and to increase operator confidence in the system while reducing the workload of the operator.

  1. Multisensor optimal information fusion input white noise deconvolution estimators.

    PubMed

    Sun, Shuli

    2004-08-01

    The unified multisensor optimal information fusion criterion weighted by matrices is rederived in the linear minimum variance sense, where the assumption of normal distribution is avoided. Based on this fusion criterion, the optimal information fusion input white noise deconvolution estimators are presented for discrete time-varying linear stochastic control systems with multiple sensors and correlated noises, which can be applied to seismic data processing in oil exploration. A three-layer fusion structure with fault tolerance and reliability is given. The first fusion layer and the second fusion layer both have netted parallel structures to determine the first-step prediction error cross-covariance for the state and the estimation error cross-covariance for the input white noise between any two sensors at each time step, respectively. The third fusion layer is the fusion center to determine the optimal matrix weights and obtain the optimal fusion input white noise estimators. The simulation results for Bernoulli-Gaussian input white noise deconvolution estimators show the effectiveness. PMID:15462453
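
    In the scalar special case, the linear-minimum-variance criterion above has a closed form: with C the matrix of estimation-error covariances between sensors (cross-covariances included), the weights C^-1·1 / (1^T C^-1 1) minimize the fused variance subject to the weights summing to one. The covariances below are invented for illustration; the paper's full matrix-weighted case generalizes this:

```python
# Linear-minimum-variance fusion weights for correlated scalar
# estimates of the same quantity. Covariance values are invented.
import numpy as np

def lmv_weights(C):
    """Minimize w^T C w subject to sum(w) = 1."""
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)   # C^-1 * 1
    return w / (ones @ w)          # normalise to sum to one

# Three correlated sensor estimates of the same scalar:
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])
w = lmv_weights(C)
x = w @ np.array([10.2, 9.8, 10.5])  # fused estimate
fused_var = w @ C @ w
```

    Because w = (0, 1, 0) is in the feasible set, the fused variance can never exceed the best single sensor's variance (here 1.0).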

  2. An objective multi-sensor fusion metric for target detection

    NASA Astrophysics Data System (ADS)

    Sweetnich, S. R.; Fernandes, S. P.; Clark, J. D.; Sakla, W. A.

    2014-06-01

    Target detection is limited by a specific sensor's capability; however, the combination of multiple sensors will improve the confidence of target detection. Confidence of detection, tracking and identifying a target in a multi-sensor environment depends on intrinsic and extrinsic sensor qualities, e.g. target geo-location registration, and environmental conditions [1]. Determination of the optimal sensors and classification algorithms, required to assist in specific target detection, has largely been accomplished with empirical experimentation. Formulation of a multi-sensor effectiveness metric (MuSEM) for sensor combinations is presented in this paper. Leveraging one or a combination of sensors should provide a higher confidence of target classification. This metric incorporates the Dempster-Shafer Theory for decision analysis. MuSEM is defined for weakly labeled multimodal data and is modeled and trained with empirical fused sensor detections; this metric is compared to Boolean algebra algorithms from decision fusion research. Multiple sensor-specific classifiers are compared and fused to characterize sensor detection models and the likelihood functions of the models. For area under the curve (AUC), MuSEM attained values as high as 0.97 with an average difference of 5.33% between Boolean fusion rules. Data was collected from the Air Force Research Lab's Minor Area Motion Imagery (MAMI) project. This metric is efficient and effective, providing a confidence of target classification based on sensor combinations.
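
    The Dempster-Shafer decision analysis the metric incorporates rests on Dempster's rule of combination. A minimal sketch over a two-element frame; the mass assignments below are invented for illustration, not values from the paper:

```python
# Dempster's rule of combination for two basic mass assignments over
# subsets of a frame of discernment. Masses are invented examples.

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozensets (focal elements) to masses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:                                  # compatible evidence
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:                                      # conflicting evidence
                conflict += ma * mb
    scale = 1.0 - conflict   # renormalise over non-conflicting mass
    return {k: v / scale for k, v in combined.items()}

T, C = frozenset({"target"}), frozenset({"clutter"})
TC = T | C                                  # "don't know"
m_radar = {T: 0.6, C: 0.1, TC: 0.3}
m_ir    = {T: 0.5, C: 0.2, TC: 0.3}
m = dempster_combine(m_radar, m_ir)
```

    Two weakly supporting sources combine into stronger belief in "target" than either source held alone, which is exactly the behaviour a fusion-effectiveness metric would reward.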

  3. Enhanced chemical weapon warning via sensor fusion

    NASA Astrophysics Data System (ADS)

    Flaherty, Michael; Pritchett, Daniel; Cothren, Brian; Schwaiger, James

    2011-05-01

    Torch Technologies, Inc., is actively involved in chemical sensor networking and data fusion via multi-year efforts with Dugway Proving Ground (DPG) and the Defense Threat Reduction Agency (DTRA). The objective of these efforts is to develop innovative concepts and advanced algorithms that enhance our national Chemical Warfare (CW) test and warning capabilities via the fusion of traditional and non-traditional CW sensor data. Under Phase I, II, and III Small Business Innovative Research (SBIR) contracts with DPG, Torch developed the Advanced Chemical Release Evaluation System (ACRES) software to support non-real-time CW sensor data fusion. Under Phase I and II SBIRs with DTRA in conjunction with the Edgewood Chemical Biological Center (ECBC), Torch is using the DPG ACRES CW sensor data fuser as a framework from which to develop the Cloud State Estimation in a Networked Sensor Environment (CENSE) data fusion system. Torch is currently developing CENSE to implement and test innovative real-time sensor network based data fusion concepts using CW and non-CW ancillary sensor data to improve CW warning and detection in tactical scenarios.

  4. Sensor fusion for mobile robot navigation

    SciTech Connect

    Kam, M.; Zhu, X.; Kalata, P.

    1997-01-01

    The authors review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. By integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. The review describes integration techniques in two categories: low-level fusion, used for direct integration of sensory data resulting in parameter and state estimates, and high-level fusion, used for indirect integration of sensory data in hierarchical architectures through command arbitration and integration of control signals suggested by different modules. The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior-based algorithms, and approaches that borrow from information theory, Dempster-Shafer reasoning, fuzzy logic, and neural networks. It points to several needs for further research, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control, and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion.
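
    As a minimal illustration of the low-level fusion category, the scalar Kalman measurement update below fuses two hypothetical range readings by inverse-variance weighting. The sensor names and noise figures are invented for the example; a real navigation filter would carry a full state vector and motion model.

```python
def fuse(est, var, meas, meas_var):
    """One scalar Kalman measurement update: fold a measurement with
    variance meas_var into the prior estimate (est, var)."""
    k = var / (var + meas_var)          # Kalman gain
    return est + k * (meas - est), (1.0 - k) * var

# Hypothetical: sonar and laser both measure range to the same wall
est, var = 10.2, 0.5 ** 2                    # prior from sonar (sigma = 0.5 m)
est, var = fuse(est, var, 10.05, 0.1 ** 2)   # update with laser (sigma = 0.1 m)
```

    The fused variance is smaller than either sensor's alone, which is the core benefit of redundant-sensor integration the review describes.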

  5. Evaluation of taste solutions by sensor fusion

    SciTech Connect

    Kojima, Yohichiro; Sato, Eriko; Atobe, Masahiko; Nakashima, Miki; Kato, Yukihisa; Nonoue, Koichi; Yamano, Yoshimasa

    2009-05-23

    In our previous studies, properties of taste solutions were discriminated based on the sound velocity and amplitude of ultrasonic waves propagating through the solutions. However, to make this method applicable to beverages, which contain many taste substances, further studies are required. In this study, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through a solution was measured and subjected to frequency analysis. Furthermore, taste sensors require various sensor fusion techniques to effectively obtain the chemical and physical parameters of taste solutions. A sensor fusion method combining an ultrasonic-wave sensor with various other sensors, such as a surface plasmon resonance (SPR) sensor, to estimate tastes was proposed and examined in this report. As a result, differences among pure water and two basic taste solutions were clearly observed as differences in their properties. Furthermore, a self-organizing neural network was applied to the obtained data to clarify the differences among the solutions.

  6. New method for sensor data fusion in machine vision

    NASA Astrophysics Data System (ADS)

    Wang, Yuan-Fang

    1991-09-01

    In this paper, we propose a new scheme for sensor data fusion in machine vision. The proposed scheme uses the Kalman filter as the sensor data integration tool and a hierarchical B-spline surface as the recording data structure. The Kalman filter is used to obtain statistically optimal estimates of the imaged surface structure based on external sensor measurements. The hierarchical B-spline surface maintains high-order surface derivative continuity, can be adaptively refined, possesses desirable local control properties, and is storage-efficient. Hence, it is used to record the reconstructed surface structure.

  7. Minimum energy information fusion in sensor networks

    SciTech Connect

    Chapline, G

    1999-05-11

    In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition, we show that for networks consisting of a large number of identical sensors, Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.
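
    The energy argument rests on description length: a fused explanation that is cheaper to transmit than the raw readings saves transmission energy. A toy two-part-code comparison, with invented reading values and a naive fixed-length code (not the paper's actual coding scheme), might look like:

```python
from math import ceil, log2

def code_length(values, alphabet):
    """Bits to transmit `values` naively: a fixed-length code over `alphabet` symbols."""
    return len(values) * ceil(log2(alphabet))

# Hypothetical readings from 8 identical sensors observing the same event (0-255 scale)
readings = [200, 201, 199, 200, 202, 200, 198, 200]

# Explanation 1: transmit each reading raw
raw_bits = code_length(readings, 256)                      # 8 sensors * 8 bits

# Explanation 2 (MDL-style): transmit one shared value plus small residuals in [-4, 3]
shared = 200
residuals = [r - shared for r in readings]
fused_bits = ceil(log2(256)) + code_length(residuals, 8)   # 8 bits + 8 * 3 bits
```

    The shared-value explanation halves the bits to send, illustrating why a minimum-description-length explanation aligns with minimum-energy communication.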

  8. Health-Enabled Smart Sensor Fusion Technology

    NASA Technical Reports Server (NTRS)

    Wang, Ray

    2012-01-01

    A process was designed to fuse data from multiple sensors in order to make a more accurate estimation of the environment and overall health in an intelligent rocket test facility (IRTF), to provide reliable, high-confidence measurements for a variety of propulsion test articles. The objective of the technology is to provide sensor fusion based on a distributed architecture. Specifically, the fusion technology is intended to provide health-condition monitoring capability at the intelligent transceiver, such as RF signal strength, battery reading, computing-resource monitoring, and sensor data reading. The technology also provides analytic and diagnostic intelligence at the intelligent transceiver, enhancing the IEEE 1451.x-based standard for sensor data management and distribution, as well as providing appropriate communications protocols to enable the complex interactions needed to support a timely, high-quality flow of information among the system elements.

  9. Sensor fusion method for machine performance enhancement

    SciTech Connect

    Mou, J.I.; King, C.; Hillaire, R.; Jones, S.; Furness, R.

    1998-03-01

    A sensor fusion methodology was developed to uniquely integrate pre-process, process-intermittent, and post-process measurement and analysis technology to cost-effectively enhance the accuracy and capability of computer-controlled manufacturing equipment. Empirical models and computational algorithms were also developed to model, assess, and then enhance the machine performance.

  10. Evaluating fusion techniques for multi-sensor satellite image data

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    Satellite image data fusion is a topic of interest in many areas, including environmental monitoring, emergency response, and defense. Typically, no single satellite sensor can provide all of the benefits offered by a combination of different sensors (e.g., high spatial but low spectral resolution vs. low spatial but high spectral resolution, optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse many types of image data to extract as much information as possible. Our work focuses on the fusion of multi-sensor image data into a unified representation that incorporates the potential strengths of each sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.
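
    The evaluation idea, clustering fused pixels and checking how cleanly the clusters recover known regions, can be sketched with a tiny plain-Python K-means. The two-feature "fused pixel" data below are synthetic stand-ins for optical/SAR features, not satellite data.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means on tuples; returns final centers and cluster memberships."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point (a fused pixel's feature vector) to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep old center if empty)
        centers = [tuple(sum(coords) / len(cl) for coords in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Synthetic fused-pixel features: (spectral index, SAR backscatter) for two land covers
water = [(0.1 + random.Random(i).random() * 0.1, 0.2) for i in range(20)]
urban = [(0.8 + random.Random(100 + i).random() * 0.1, 0.9) for i in range(20)]
centers, clusters = kmeans(water + urban, k=2)
```

    A fusion method would then be scored by how closely such cluster maps match ground-truth region labels.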

  11. Algorithms for distributed chemical sensor fusion

    NASA Astrophysics Data System (ADS)

    Lundberg, Scott; Paffenroth, Randy; Yosinski, Jason

    2010-04-01

    The fusion of Chemical, Biological, Radiological, and Nuclear (CBRN) sensor readings from both point and stand-off sensors requires a common space in which to perform estimation. In this paper we suggest a common representational space that allows us to properly assimilate measurements from a variety of different sources while still maintaining the ability to correctly model the structure of CBRN clouds. We design this space with sparse measurement data in mind, in such a way that we can estimate not only the location of the cloud but also our uncertainty in that estimate. We contend that a treatment of the uncertainty of an estimate is essential in order to derive actionable information from any sensor system, especially for systems designed to operate with minimal sensor data. A companion paper [1] further extends and evaluates the uncertainty management introduced here for assimilating sensor measurements into a common representational space.

  12. Sensor fusion for precision agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Information-based management of crop production systems known as precision agriculture relies on different sensor technologies aimed at characterization of spatial heterogeneity of a cropping environment. Remote and proximal sensing systems have been deployed to obtain high-resolution data pertainin...

  13. Sensor fusion for improved indoor navigation

    NASA Astrophysics Data System (ADS)

    Emilsson, Erika; Rydell, Joakim

    2012-09-01

    A reliable indoor positioning system providing high accuracy has the potential to significantly increase the safety of first responders and military personnel. To enable navigation in a broad range of environments and obtain more accurate and robust positioning results, we propose a multi-sensor fusion approach. We describe and evaluate a positioning system based on sensor fusion between a foot-mounted inertial measurement unit (IMU) and a camera-based system for simultaneous localization and mapping (SLAM). The complete system provides accurate navigation in many relevant environments without depending on preinstalled infrastructure. The camera-based system uses both inertial measurements and visual data, thereby enabling navigation even in environments and scenarios where one of the sensors provides unreliable data for a few seconds. When sufficient light is available, the camera-based system generally provides good performance. The foot-mounted system provides accurate positioning when distinct steps can be detected, e.g., during walking and running, even in dark or smoke-filled environments. By combining the two systems, the integrated positioning system can be expected to enable accurate navigation in almost all kinds of environments and scenarios. In this paper we present results from initial tests, which show that the proposed sensor fusion improves the navigation solution considerably in scenarios where either the foot-mounted or the camera-based system is unable to navigate on its own.

  14. Exploiting phase transitions for fusion optimization problems

    NASA Astrophysics Data System (ADS)

    Svenson, Pontus

    2005-05-01

    Many optimization problems that arise in multi-target tracking and fusion applications are known to be NP-complete, i.e., believed to have worst-case complexities that are exponential in problem size. Recently, many such NP-complete problems have been shown to display threshold phenomena: it is possible to define a parameter such that the probability of a random problem instance having a solution jumps from 1 to 0 at a specific value of the parameter. It is also found that the amount of resources needed to solve a problem instance peaks at the transition point. Among the problems found to display this behavior are graph coloring (a.k.a. clustering, relevant for multi-target tracking), satisfiability (which occurs in resource allocation and planning problems), and the travelling salesperson problem. Physicists studying these problems have found intriguing similarities to phase transitions in spin models of statistical mechanics. Many methods previously used to analyze spin glasses have been used to explain some of the properties of the behavior at the transition point. It turns out that the transition happens because the fitness landscape of the problem changes as the parameter is varied. Some algorithms have been introduced that exploit this knowledge of the structure of the fitness landscape. In this paper, we review some of the experimental and theoretical work on threshold phenomena in optimization problems and indicate how optimization problems from tracking and sensor resource allocation could be analyzed using these results.
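
    The threshold phenomenon is easy to reproduce for random 3-SAT, where the control parameter is the clause-to-variable ratio alpha and the transition sits near alpha ≈ 4.27 for large instances. A brute-force sketch (instance sizes chosen only to keep the exhaustive search tractable, so the transition is smeared by finite-size effects):

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables, each randomly negated."""
    return [[v * rng.choice((1, -1)) for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    # Brute force over all assignments: fine for the tiny n used here
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def frac_sat(n_vars, alpha, trials=20, seed=0):
    """Fraction of random instances at ratio alpha that are satisfiable."""
    rng = random.Random(seed)
    m = round(alpha * n_vars)
    return sum(satisfiable(random_3sat(n_vars, m, rng), n_vars)
               for _ in range(trials)) / trials

# Below the threshold almost everything is satisfiable; above it, almost nothing
low, high = frac_sat(10, 2.0), frac_sat(10, 7.0)
```

    Sweeping alpha between these two values traces out the sharp drop in solvability that the paper connects to spin-glass phase transitions.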

  15. Robot navigation using simple sensor fusion

    SciTech Connect

    Jollay, D.M.; Ricks, R.E.

    1988-01-01

    Sensors on an autonomous mobile system are essential for determining the environment for navigation purposes. As is well documented in previous publications, sonar sensors are inadequate for depicting a real-world environment and therefore do not provide accurate information for navigation if not used in conjunction with another type of sensor. This paper describes a simple, inexpensive, and relatively fast navigation algorithm involving vision and sonar sensor fusion for use in navigating an autonomous robot in an unknown and potentially dynamic environment. Navigation of the mobile robot was accomplished by using a TV camera as the primary sensor. Input data received from the camera were digitized through a video module and then processed using a dedicated vision system to enable detection of obstacles and to determine edge positions relative to the robot. Since 3D vision was not attempted, due to its complex and time-consuming nature, sonar sensors were then used as secondary sensors in order to determine the proximity of detected obstacles. By then fusing the sensor data, the robot was able to navigate quickly and collision-free to a given goal, achieving obstacle avoidance in real time.

  16. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to the models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for determining when a sensor fusion system leads to: poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); better performance than with either single-sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.

  17. Sensor fusion for antipersonnel landmine detection: a case study

    NASA Astrophysics Data System (ADS)

    den Breejen, Eric; Schutte, Klamer; Cremer, Frank

    1999-08-01

    In this paper the multi-sensor fusion results obtained within the European research project GEODE are presented. The layout of the test lane and the individual sensors used are described. The implementation of the SCOOP algorithm improves the ROC curves, as both the false alarm surface and the number of false alarms are taken into account. The confidence grids produced by the sensor manufacturers are used as input for the different sensor fusion methods implemented. The multi-sensor fusion methods implemented are Bayes, Dempster-Shafer, fuzzy probabilities, and rules. The mapping of the confidence grids to the input parameters of the fusion methods is an important step. Due to the limited amount of available data, the entire test lane is used for both training and evaluation. All four sensor fusion methods provide better detection results than the individual sensors.

  18. Statistical modeling and data fusion of automotive sensors for object detection applications in a driving environment

    NASA Astrophysics Data System (ADS)

    Hurtado, Miguel A.

    In this work, we consider the application of classical statistical inference to the fusion of data from different sensing technologies for object detection applications, in order to increase the overall performance of a given active-safety automotive system. Research evolved mainly around a centralized sensor fusion architecture, assuming that three non-identical sensors, modeled by corresponding probability density functions (pdfs), provide discrete information on a target being present or absent, with associated probabilities of detection and false alarm for the sensor fusion engine. The underlying sensing technologies are the following standard automotive sensors: a 24.5 GHz radar, a high-dynamic-range infrared camera, and a laser radar. A complete mathematical framework was developed to select the optimal decision rule, based on a generalized multinomial distribution resulting from a sum of weighted Bernoulli random variables, from the Neyman-Pearson lemma and the likelihood ratio test. Moreover, to better understand the model and to obtain upper bounds on the performance of the fusion rules, we assumed exponential pdfs for each sensor, and a parallel mathematical expression was obtained, based on a generalized gamma distribution resulting from a sum of weighted exponential random variables, for the situation when the continuous random vector of information is available. Mathematical expressions and results were obtained for modeling the following case scenarios: (i) non-identical sensors, (ii) identical sensors, (iii) a combination of non-identical and identical sensors, (iv) faulty sensor operation, (v) dominant sensor operation, (vi) negative sensor operation, and (vii) distributed sensor fusion. The second and final part of this research focused on: (a) simulation of statistical models for each sensing technology, (b) comparisons with distributed fusion, and (c) an overview of dynamic sensor fusion and adaptive decision rules.

  19. Method and apparatus for sensor fusion

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)

    1991-01-01

    Method and apparatus for fusion of data from optical and radar sensors by error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values for the RCS, and improving the surface model from results of the comparison. Theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.

  20. Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa

    1995-09-01

    Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute-oriented measurement sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) remote platforms (data links); and (e) non-sensor sources (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture should lead to a higher-performance fusion system. However, there are a number of system and fusion-architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g., variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft, in the context of the mission requirements and environmental conditions.

  1. Fluorescent sensors based on bacterial fusion proteins

    NASA Astrophysics Data System (ADS)

    Prats Mateu, Batirtze; Kainz, Birgit; Pum, Dietmar; Sleytr, Uwe B.; Toca-Herrera, José L.

    2014-06-01

    Fluorescent proteins are widely used as markers for biomedical and technological purposes. The aim of this project was therefore to create a fluorescent sensor, based on the green and cyan fluorescent proteins, using bacterial S-layer proteins as a scaffold for the fluorescent tag. We report the cloning, expression, and purification of three S-layer fluorescent proteins: SgsE-EGFP, SgsE-ECFP, and SgsE-13aa-ECFP, the last containing a 13-amino-acid rigid linker. The pH dependence of the fluorescence intensity of the S-layer fusion proteins, monitored by fluorescence spectroscopy, showed that the ECFP tag was more stable than EGFP. Furthermore, the fluorescent fusion proteins were reassembled on silica particles modified with cationic and anionic polyelectrolytes. Zeta-potential measurements confirmed the particle coatings and indicated their colloidal stability. Flow cytometry and fluorescence microscopy showed that the fluorescence of the fusion proteins was pH dependent and sensitive to the underlying polyelectrolyte coating. This might suggest that the fluorescent tag is not completely exposed to the bulk medium as an independent moiety. Finally, it was found that viscosity enhanced the fluorescence intensity of the three fluorescent S-layer proteins.

  2. Sensor fusion for intelligent behavior on small unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Ahuja, G.; Sights, B.; Pacis, E. B.; Everett, H. R.

    2007-04-01

    Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible-light and thermal cameras, scanning LIDAR, and ranging sonar. Data from these sensors are vital to emerging autonomous robotic behaviors. However, data from any given sensor can become noisy or erroneous under a range of conditions, reducing the reliability of autonomous operations. We seek to increase this reliability through data fusion. Data fusion includes characterizing the strengths and weaknesses of each sensor modality and combining their data such that the fused result provides more accurate information than any single sensor. We describe data fusion efforts applied to two autonomous behaviors: leader-follower and human presence detection. The behaviors are implemented and tested in a variety of realistic conditions.

  3. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems that are usually performed sequentially: first detecting targets, then tracking, then fusing multiple sensors, which reduces computation. This procedure, however, is inapplicable to difficult targets that cannot be reliably detected using individual sensors, on individual scans or frames. In such more complicated cases one has to perform the fusing, tracking, and detecting functions concurrently. This has often led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance compared to the information-theoretic content of all the available data. It is well appreciated that in this task the human mind is qualitatively far superior to existing mathematical methods of sensor fusion; however, the human mind is limited in the amount of information and speed of computation it can cope with. Therefore, research efforts have been devoted toward incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how can existing limitations be overcome? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present a mathematical approach to accomplishing this on current computers. The presentation will summarize the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyze the fundamental limitations of existing algorithms and neural networks, and relate them to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic will be introduced, along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with

  4. Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu

    2015-01-01

    Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
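
    A Chair-Varshney-style version of such an LRT fusion rule, with the channel bit error rate folded into each sensor's effective detection and false-alarm probabilities, can be sketched as follows. The (Pd, Pf, BER) triples are hypothetical, and this is a generic sketch rather than the paper's exact rule.

```python
from math import log

def llr_weight(bit, pd, pf, ber):
    """Log-likelihood ratio of one decoded local decision, with the channel BER
    folded into the effective detection/false-alarm probabilities."""
    pd_eff = pd * (1 - ber) + (1 - pd) * ber
    pf_eff = pf * (1 - ber) + (1 - pf) * ber
    if bit:
        return log(pd_eff / pf_eff)
    return log((1 - pd_eff) / (1 - pf_eff))

def fuse_decisions(bits, sensors, threshold=0.0):
    """Global decision at the fusion center: sum the per-sensor LLRs of the
    decoded bits and compare to a threshold."""
    stat = sum(llr_weight(b, *s) for b, s in zip(bits, sensors))
    return stat > threshold, stat

# Hypothetical local detectors: (Pd, Pf, BER of the reporting channel)
sensors = [(0.9, 0.1, 0.01), (0.8, 0.2, 0.05), (0.7, 0.3, 0.2)]
decision, stat = fuse_decisions([1, 1, 0], sensors)
```

    Note how a noisier channel shrinks a sensor's effective LLR weight, which is the robustness mechanism the paper's fusion rule formalizes.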

  5. Sensor fusion for intelligent process control.

    SciTech Connect

    Connors, John J.; Hill, Kevin; Hanekamp, David; Haley, William F.; Gallagher, Robert J.; Gowin, Craig; Farrar, Arthur R.; Sheaffer, Donald A.; DeYoung, Mark A.; Bertram, Lee A.; Dodge, Craig; Binion, Bruce; Walsh, Peter M.; Houf, William G.; Desam, Padmabhushana R.; Tiwary, Rajiv; Stokes, Michael R.; Miller, Alan J.; Michael, Richard W.; Mayer, Raymond M.; Jiao, Yu; Smith, Philip J.; Arbab, Mehran; Hillaire, Robert G.

    2004-08-01

    An integrated system for the fusion of product and process sensors and controls for production of flat glass was envisioned, having as its objective the maximization of throughput and product quality subject to emission limits, furnace refractory wear, and other constraints. Although the project was prematurely terminated, stopping the work short of its goal, the tasks that were completed show the value of the approach and objectives. Though the demonstration was to have been done on a flat glass production line, the approach is applicable to control of production in the other sectors of the glass industry. Furthermore, the system architecture is also applicable in other industries utilizing processes in which product uniformity is determined by ability to control feed composition, mixing, heating and cooling, chemical reactions, and physical processes such as distillation, crystallization, drying, etc. The first phase of the project, with Visteon Automotive Systems as industrial partner, was focused on simulation and control of the glass annealing lehr. That work produced the analysis and computer code that provide the foundation for model-based control of annealing lehrs during steady state operation and through color and thickness changes. In the second phase of the work, with PPG Industries as the industrial partner, the emphasis was on control of temperature and combustion stoichiometry in the melting furnace, to provide a wider operating window, improve product yield, and increase energy efficiency. A program of experiments with the furnace, CFD modeling and simulation, flow measurements, and sensor fusion was undertaken to provide the experimental and theoretical basis for an integrated, model-based control system utilizing the new infrastructure installed at the demonstration site for the purpose. 
    In spite of the fact that the project was terminated during the first year of the second phase of the work, the results of these first steps toward implementation

  6. Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    PubMed Central

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering-target tracking, are described. Both the advantages and the limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; (3) establishment of an automatic quality assessment scheme. PMID:22408479

  7. Principles of data-fusion in multi-sensor systems for non-destructive testing

    NASA Astrophysics Data System (ADS)

    Chioclea, Shmuel; Dickstein, Phineas

    2000-05-01

    In recent years, there has been progress in the application of measurement and control systems that engage multi-sensor arrays. Several algorithms and techniques have been developed for integrating the information obtained from the sensors. The fusion of the data may be complicated by the fact that each sensor has its own performance characteristics, and that different sensors may detect different physical phenomena. As a result, data fusion turns out to be a multidisciplinary field that applies principles adopted from other fields such as signal processing, artificial intelligence, statistics, and information theory. The data fusion machine tries to imitate the human brain in combining data from numerous sensors and making optimal inferences about the environment. The present paper provides a critical review of data fusion algorithms and techniques and a summary of the experience gained to date from the several preliminary NDT studies that have applied multi-sensor data fusion systems. Consequently, this paper provides a list of rules and criteria to be followed in future applications of data fusion to nondestructive testing.

  8. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data, including data from global spaceborne active and passive sensors, regional ground-based weather surveillance radars, and local rain gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain, which allows exploiting efficient linear estimation methodologies while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capturing the non-Gaussian singular structure of rainfall, reduced uncertainty of estimation, and efficiency of computation are the main advantages of the proposed methodologies over the commonly used standard Gaussian techniques.
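    The "optimal integration" referred to above reduces, in the purely Gaussian case, to minimum-variance inverse-variance weighting of the individual estimates; the GSM machinery generalizes this baseline. A minimal sketch of that Gaussian baseline, with made-up error variances for the ground and spaceborne radars:

```python
import numpy as np

def fuse_gaussian(z1, var1, z2, var2):
    """Minimum-variance linear fusion of two unbiased estimates
    (the standard Gaussian case that the GSM approach generalizes)."""
    w1 = var2 / (var1 + var2)
    z = w1 * z1 + (1 - w1) * z2
    var = var1 * var2 / (var1 + var2)   # fused variance < either input variance
    return z, var

rng = np.random.default_rng(4)
truth = 5.0                                   # mm/h rain rate, made-up value
radar = truth + rng.normal(0, 1.0, 10000)     # ground radar, sigma = 1.0
space = truth + rng.normal(0, 2.0, 10000)     # spaceborne PR, sigma = 2.0
fused, fused_var = fuse_gaussian(radar, 1.0, space, 4.0)
print(np.var(fused), fused_var)
```

The empirical variance of the fused estimates matches the predicted 0.8, below the better sensor's variance of 1.0.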

  9. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high-resolution video sensors. Time-of-Flight sensor fusion is a highly active field of research. Over recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high-quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.

  10. ELIPS: Toward a Sensor Fusion Processor on a Chip

    NASA Technical Reports Server (NTRS)

    Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James

    1998-01-01

    The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based, and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference, and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system, and a neural network) have been fabricated in analog VLSI hardware and have demonstrated microsecond processing times.

  11. Facility Monitoring: A Qualitative Theory for Sensor Fusion

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2001-01-01

    Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviours from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory where each sensor in the system is a HAS. Hence, given the rich qualitative information from each HAS, a paradigm and formal definitions are given so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. This paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is intelligent health management of rocket engine test stands.

  12. Improved UXO detection via sensor fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Li, Jing; Carin, Lawrence; Collins, Leslie M.

    2000-08-01

    Traditional algorithms for UXO remediation experience severe difficulties distinguishing buried targets from anthropic clutter, and in most cases UXO items are found among extensive surface clutter and shrapnel from ordnance operations. These problems render site remediation a very slow, labor-intensive, and inefficient process. While sensors have improved significantly over the past several years in their ability to detect conducting and/or permeable targets, reduction of the false alarm rate has proven to be a significantly more challenging problem. Our work has focused on the development of optimal signal processing algorithms that rigorously incorporate the underlying physical characteristics of the sensor and the anticipated UXO target in order to address the false alarm issue. In this paper, we describe several techniques for discriminating targets from clutter that have been applied to data obtained with the Multi-sensor Towed Array Detection System (MTADS) developed by the Naval Research Laboratory. MTADS includes both EMI and magnetometer sensors. We describe a variety of signal processing techniques incorporating physics-based models that have been applied to the data measured by MTADS during field demonstrations. We compare and contrast the performance of the various algorithms and discuss tradeoffs, such as training requirements. The results of this analysis quantify the utility of fusing magnetometer and EMI data. For example, in the JPG-IV test, at the false-positive level obtained by NRL, one of our algorithms achieved a 13 percent improvement in true-positive level over the algorithm traditionally used for processing MTADS data.

  13. Driver drowsiness detection using multimodal sensor fusion

    NASA Astrophysics Data System (ADS)

    Andreeva, Elena O.; Aarabi, Parham; Philiastides, Marios G.; Mohajer, Keyvan; Emami, Majid

    2004-04-01

    This paper proposes a multi-modal sensor fusion algorithm for the estimation of driver drowsiness. Driver sleepiness is believed to be responsible for more than 30% of passenger car accidents and for 4% of all accident fatalities. In commercial vehicles, drowsiness is blamed for 58% of single truck accidents and 31% of commercial truck driver fatalities. This work proposes an innovative automatic sleep-onset detection system. Using multiple sensors, the driver's body is studied as a mechanical structure of springs and dampers. The sleep-detection system consists of highly sensitive triple-axial accelerometers that monitor the driver's upper body in 3-D. The subject is modeled as a linear time-variant (LTV) system. An LMS adaptive filter estimation algorithm generates the transfer function (i.e., weight coefficients) for this LTV system. Separate coefficients are generated for the awake and asleep states of the subject. These coefficients are then used to train a neural network. Once trained, the neural network classifies the condition of the driver as either awake or asleep. The system has been tested on a total of 8 subjects. The tests were conducted on sleep-deprived individuals for the sleep state and on fully awake individuals for the awake state. When trained and tested on the same subject, the system detected the sleep and awake states of the driver with a success rate of 95%. When the system was trained on three subjects and then retested on a fourth "unseen" subject, the classification rate dropped to 90%. Furthermore, an attempt was made to correlate driver posture and sleepiness by observing how car vibrations propagate through a person's body. Eight additional subjects were studied for this purpose. The results of this experiment proved inconclusive, which was attributed to significant differences in the individuals' habitual postures.
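    The LMS identification step described above, estimating a set of weight coefficients that map an input signal to the body's measured response, can be sketched as follows. The signal names, tap count, and step size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Estimate FIR weights mapping input x (e.g. car vibration) to
    response d (e.g. torso accelerometer) via the LMS update."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent samples first
        e = d[n] - w @ u                    # prediction error
        w += mu * e * u                     # stochastic-gradient step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
true_w = np.array([0.9, -0.4, 0.2, 0.05])   # pretend "awake" body dynamics
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = lms_identify(x, d)
print(np.round(w_hat, 2))
```

In the paper's scheme, weight vectors like `w_hat` estimated in the awake and asleep states would become feature vectors for the neural network classifier.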

  14. Optimality of Rate Balancing in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Tarighati, Alla; Jalden, Joakim

    2016-07-01

    We consider the problem of distributed binary hypothesis testing in a parallel network topology where sensors independently observe some phenomenon and send a finite-rate summary of their observations to a fusion center for the final decision. We explicitly consider a scenario in which (integer) rate messages are sent over an error-free multiple access channel, modeled by a sum rate constraint at the fusion center. This problem was previously studied by Chamberland and Veeravalli, who provided sufficient conditions for the optimality of one-bit sensor messages. Their result is, however, crucially dependent on the feasibility of having as many one-bit sensors as the (integer) sum rate constraint of the multiple access channel, an assumption that often cannot be satisfied in practice. This prompts us to consider the case of an a priori limited number of sensors, and we provide sufficient conditions under which having no two sensors with a rate difference of more than one bit, so-called rate balancing, is an optimal strategy with respect to the Bhattacharyya distance between the hypotheses at the input to the fusion center. We further discuss explicit observation models under which these sufficient conditions are satisfied.
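    The role of the Bhattacharyya distance can be illustrated with a toy version of this setup: two conditionally independent Gaussian observations quantized at integer rates, where the per-sensor distances add across sensors, so rate allocations can be compared by their sums. The uniform quantizer design below is an assumption made for illustration, not the paper's construction:

```python
import math
import numpy as np

def norm_cdf(x, mean=0.0):
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0)))

def cell_probs(mean, thresholds):
    """Probabilities of a unit-variance Gaussian falling in each quantizer cell."""
    edges = [-math.inf, *thresholds, math.inf]
    return np.array([norm_cdf(b, mean) - norm_cdf(a, mean)
                     for a, b in zip(edges[:-1], edges[1:])])

def bhattacharyya(rate):
    """Bhattacharyya distance between hypotheses (means 0 and 1) at the
    output of an illustrative uniform rate-bit quantizer."""
    thresholds = np.linspace(-2.0, 3.0, 2**rate + 1)[1:-1]
    p, q = cell_probs(0.0, thresholds), cell_probs(1.0, thresholds)
    return -math.log(np.sum(np.sqrt(p * q)))

# With conditionally independent sensors the distances add across sensors.
balanced = bhattacharyya(2) + bhattacharyya(2)     # rates (2, 2)
unbalanced = bhattacharyya(3) + bhattacharyya(1)   # rates (3, 1)
print(balanced, unbalanced)
```

For this observation model the balanced allocation beats the unbalanced one, consistent with the paper's thesis; whether that holds in general is exactly what the sufficient conditions address.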

  15. An Alternate View Of Munition Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Mayersak, J. R.

    1988-08-01

    An alternate multimode sensor fusion scheme is treated. The concept is designed to acquire and engage high-value relocatable targets in a lock-on-after-launch sequence. The approach uses statistical decision concepts to determine the authority to be assigned to each mode in the acquisition sequence voting and decision process. Statistical target classification and recognition in the engagement sequence is accomplished through variable-length feature vectors set by adaptive logics. The approach uses multiple decision spaces for acquisition and classification; the number of spaces selected is adaptively weighted and adjusted. The scheme uses the type of climate (arctic, temperate, desert, or equatorial), diurnal effects (time of day), type of background, type of countermeasures present (signature suppression or obscuration, false-target decoys, or electronic warfare), and other factors to make these selections. The approach is discussed in simple terms. Voids and deficiencies in the statistical database used to train such algorithms are discussed. The approach is being developed to engage deep battle targets such as surface-to-surface missile systems, air defense units, and self-propelled artillery.

  16. Real-Time Blackboards For Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Johnson, Donald H.; Shaw, Scott W.; Reynolds, Steven; Himayat, Nageen

    1989-09-01

    Multi-sensor fusion, at the most basic level, can be cast into a concise, elegant model. Reality demands, however, that this model be modified and augmented. These modifications often result in software systems that are confusing in function and difficult to debug. This problem can be ameliorated by adopting an object-oriented, data-flow programming style. For real-time applications, this approach simplifies data communications and storage management. The concept of object-oriented, data-flow programming is conveniently embodied in the blackboard style of software architecture. Blackboard systems allow diverse programs access to a central database. When the blackboard is described as an object, it can be distributed over multiple processors for real-time applications. Choosing the appropriate parallel architecture is the subject of ongoing research. A prototype blackboard has been constructed to fuse optical image regions and Doppler radar events. The system maintains tracks of simulated targets in real time. The results of this simulation have been used to direct further research on real-time blackboard systems.

  17. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures, including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  18. Linear optimal control of tokamak fusion devices

    SciTech Connect

    Kessel, C.E.; Firestone, M.A.; Conn, R.W.

    1989-05-01

    The control of plasma position, shape and current in a tokamak fusion reactor is examined using linear optimal control. These advanced tokamaks are characterized by non-up-down-symmetric coils and structure, thick structure surrounding the plasma, eddy currents, shaped plasmas, superconducting coils, vertically unstable plasmas, and hybrid-function coils providing ohmic heating, vertical field, radial field, and shaping field. Models of the electromagnetic environment in a tokamak are derived and used to construct control gains that are tested in nonlinear simulations with initial perturbations. The issues of applying linear optimal control to advanced tokamaks are addressed, including complex equilibrium control, choice of cost functional weights, the coil voltage limit, discrete control, and order reduction. Results indicate that linear optimal control is a feasible technique for controlling advanced tokamaks, where the more common classical control will be severely strained or will not work. 28 refs., 13 figs.

  19. Land mine detection through GPR and EMI sensor fusion

    NASA Astrophysics Data System (ADS)

    Weisenseel, Robert A.; Castanon, David A.; Karl, William C.

    1999-12-01

    In this paper, we develop a system that exploits sensor fusion to detect and locate plastic anti-personnel (A/P) mines. We design and test the system using data from Monte Carlo electromagnetic induction spectroscopy (EMIS) and ground penetrating radar (GPR) simulations. We include the effects of both random soil surface variability and sensor noise. In the presence of a rough surface and a heterogeneous, multi-element clutter environment, we obtain good results fusing EMIS and GPR data using a statistical approach. More generally, we demonstrate a framework for simulating and testing sensor configurations and sensor fusion approaches for landmine and unexploded ordnance (UXO) detection systems. Taking advantage of high-fidelity electromagnetic simulation, we develop a controlled environment for testing sensor fusion concepts, from varied sensor arrangements to detection algorithms. In this environment, we can examine the effect of changing mine structure, soil parameters, and sensor geometry on the sensor fusion problem. We can then generalize these results to produce mine detectors robust to real-world variations.

  20. Sensor fusion and damage classification in composite materials

    NASA Astrophysics Data System (ADS)

    Zhou, Wenfan; Reynolds, Whitney D.; Moncada, Albert; Kovvali, Narayan; Chattopadhyay, Aditi; Papandreou-Suppappola, Antonia; Cochran, Douglas

    2008-03-01

    We describe a statistical method for the classification of damage in complex structures. Our approach is based on a Bayesian framework using hidden Markov models (HMMs) to model time-frequency features extracted from structural data. We also propose two different methods for sensor fusion to combine information from multiple distributed sensors such that the overall classification performance is increased. The proposed approaches are applied to the classification and localization of delamination in a laminated composite plate. Results using both discrete and continuous observation density HMMs, together with the sensor fusion, are presented and discussed.
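    Decision-level fusion of the kind described, combining evidence from distributed sensors into one classification, can be sketched by summing per-sensor class log-likelihoods under a conditional-independence assumption. The Gaussian feature models below are illustrative stand-ins for the paper's HMM likelihoods of time-frequency features:

```python
import numpy as np

# Hypothetical per-sensor Gaussian feature models (mean, std) for two classes;
# in the paper these would be HMM likelihoods of time-frequency features.
models = {
    "sensor_A": {"healthy": (0.0, 1.0), "delaminated": (2.0, 1.0)},
    "sensor_B": {"healthy": (0.0, 1.5), "delaminated": (1.5, 1.5)},
}

def log_lik(x, mean, std):
    return -0.5 * np.log(2 * np.pi * std**2) - 0.5 * ((x - mean) / std) ** 2

def fuse(observations):
    """Decision-level fusion: sum per-sensor class log-likelihoods
    (valid when sensor features are conditionally independent given the class)."""
    classes = ["healthy", "delaminated"]
    scores = {c: sum(log_lik(observations[s], *models[s][c]) for s in models)
              for c in classes}
    return max(scores, key=scores.get)

print(fuse({"sensor_A": 1.9, "sensor_B": 1.4}))   # both sensors suggest damage
print(fuse({"sensor_A": 0.1, "sensor_B": -0.2}))  # both sensors look nominal
```

The summation means a weakly informative sensor simply contributes a near-flat term, which is one reason likelihood-level fusion tends to beat fusing hard per-sensor decisions.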

  1. IMM filtering on parametric data for multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Shafer, Scott; Owen, Mark W.

    2014-06-01

    In tracking, many types of sensor data can be obtained and utilized to distinguish a particular target. Commonly, kinematic information is used for tracking, but this can be combined with identification attributes and parametric information passively collected from the target's emitters. Along with the standard tracking process (predict, associate, score, update, and initiate) that operates in all kinematic trackers, parametric data can also be utilized to perform these steps and provide a means for feature fusion. Feature fusion, utilizing parametrics from multiple sources, yields a rich data set providing many degrees of freedom to separate and correlate data into appropriate tracks. Parametric radar data can exhibit many dynamics, including stable, agile, and jittered behaviors. By utilizing a running sample mean and sample variance, a good estimate of radar parametrics is achieved. However, when dynamics are involved, a severe lag can occur and a non-optimal estimate is obtained. This estimate can yield incorrect associations in feature space and cause track fragmentation or miscorrelation. In this paper we investigate the accuracy of the interacting multiple model (IMM) filter at estimating the first and second moments of radar parametrics. The algorithm is assessed by Monte Carlo simulation and compared against a running sample mean/variance technique. We find that the IMM approach yields a better result due to its ability to quickly adapt to dynamical systems given the proper model and tuning.
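    The running sample mean/variance baseline, and the lag it suffers when a parametric jumps, can be reproduced in a few lines with Welford's online update. The forgetting-factor variant below is a crude stand-in for the adaptivity an IMM filter provides, and all signal values are made up:

```python
import numpy as np

class RunningStats:
    """Welford-style online mean/variance; lam < 1 adds exponential
    forgetting (a simple stand-in for the adaptivity of an IMM filter)."""
    def __init__(self, lam=1.0):
        self.lam, self.n, self.mean, self.m2 = lam, 0.0, 0.0, 0.0
    def update(self, x):
        self.n = self.lam * self.n + 1.0        # effective sample count
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 = self.lam * self.m2 + delta * (x - self.mean)
    @property
    def var(self):
        return self.m2 / self.n if self.n > 1 else 0.0

rng = np.random.default_rng(1)
flat = RunningStats(lam=1.0)       # plain running sample statistics
fading = RunningStats(lam=0.95)    # ~20-sample effective memory
# Hypothetical emitter parametric that jumps from 10.0 to 12.0 mid-dwell.
data = np.concatenate([10.0 + 0.1 * rng.standard_normal(200),
                       12.0 + 0.1 * rng.standard_normal(200)])
for x in data:
    flat.update(x)
    fading.update(x)
print(flat.mean, fading.mean)
```

The plain running mean ends near 11.0, halfway between the two regimes, while the forgetting estimator tracks the new value, which is the lag-vs-adaptivity trade the IMM resolves more principledly.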

  2. Overcoming adverse weather conditions with a common optical path, multiple sensors, and intelligent image fusion system

    NASA Astrophysics Data System (ADS)

    Ng, Joseph; Piacentino, Michael; Caldwell, Brian

    2008-04-01

    Mission success is highly dependent on the ability to accomplish Surveillance, Situation Awareness, and Target Detection and Classification, but this is challenging under adverse weather conditions. This paper introduces an engineering prototype that addresses the image collection challenges using a Common Optical Path, Multiple Sensors and an Intelligent Image Fusion System, and provides illustrations and sample fusion images. Panavision's advanced wide-spectrum optical design has permitted a suite of imagers to perform observations through a common optical path with a common field of view, thereby aligning images and facilitating optimized downstream image processing. The adaptable design also supports continuous zoom or Galilean lenses for multiple fields of view. The Multiple Sensors include: (1) high-definition imaging sensors that are small, have low power consumption, and have a wide dynamic range; (2) EMCCD sensors that transition from daylight to starlight, even under poor weather conditions, with sensitivity down to 0.00025 lux; and (3) SWIR sensors that, with the advancement in InGaAs, are able to generate ultra-high-sensitivity images from 1-1.7 μm reflective light and can achieve imaging through haze and some types of camouflage. The intelligent fusion of multiple sensors provides high-resolution color information with previously impossible sensitivity and contrast. With the integration of Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs), real-time image processing and fusion algorithms can facilitate mission success in a small, low-power package.

  3. Neural networks for sensor fusion of meteorological measurements

    NASA Astrophysics Data System (ADS)

    Yee, Young P.; Measure, Edward M.; Cogan, James L.; Bleiweis, Max

    2001-03-01

    The collection and management of vast quantities of meteorological data, including satellite-based as well as ground-based measurements, presents great challenges for the optimal usage of this information. To address these issues, the Army Laboratory has developed neural networks for combining multi-sensor meteorological data for Army battlefield weather forecasting models. As a demonstration of this data fusion methodology, multi-sensor data were taken from the Meteorological Measurement Set Profiler (MMSP-POC) system and from satellites with orbits coinciding with the geographical locations of interest. The MMSP-POC comprises a suite of remote sensing instrumentation and surface measuring devices. Neural network techniques were used to retrieve temperature and wind information from a combination of polar orbiter and/or geostationary satellite observations and ground-based measurements. Back-propagation neural networks were constructed which use satellite radiances, simulated microwave radiometer measurements, and other ground-based measurements as inputs and produce temperature and wind profiles as outputs. The network was trained with rawinsonde measurements used as truth values. The final outcome will be an integrated, merged temperature/wind profile from the surface up to the upper troposphere.

  4. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes. PMID:26712766
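    The weighted-rule fusion at the heart of such a fuzzy controller can be sketched in miniature: fuzzify distance readings into near/far memberships, fire rules, and defuzzify into wheel velocities by weighted averaging. The membership shapes and the tiny rule base below are illustrative assumptions, not the paper's 24 rules:

```python
def near(d, d_max=1.0):
    """Triangular membership: 1 when touching, 0 beyond d_max (illustrative)."""
    return max(0.0, min(1.0, 1.0 - d / d_max))

def far(d, d_max=1.0):
    return 1.0 - near(d, d_max)

def fuzzy_drive(d_left, d_right, v=1.0):
    """Tiny analogue of the paper's rule base: an obstacle near on one
    side slows the opposite wheel so the robot turns away from it."""
    rules = [
        (near(d_left), (v, 0.2 * v)),             # obstacle left  -> veer right
        (near(d_right), (0.2 * v, v)),            # obstacle right -> veer left
        (min(far(d_left), far(d_right)), (v, v)), # clear          -> go straight
    ]
    w = sum(weight for weight, _ in rules)
    left = sum(weight * out[0] for weight, out in rules) / w
    right = sum(weight * out[1] for weight, out in rules) / w
    return left, right

left, right = fuzzy_drive(0.2, 2.0)   # wall close on the left
print(left, right)
```

With the wall on the left, the left wheel comes out faster than the right, steering the robot away; the smooth memberships give graded turns rather than bang-bang avoidance.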

  5. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision-free mobile robot navigation approach based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot’s wheels; and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and the line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes. PMID:26712766

  6. Large-scale optimal sensor array management for target tracking

    NASA Astrophysics Data System (ADS)

    Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.

    2004-01-01

    Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.
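    The greedy search described above can be illustrated with a static stand-in for the PCRLB: pick sensors one at a time so as to minimize the trace of the inverse Fisher information for a bearing-only localization problem, avoiding enumeration of all subsets. The geometry, measurement model, and noise value are made up for illustration:

```python
import numpy as np

def greedy_select(positions, target, k, sigma=0.1):
    """Greedily pick k bearing-only sensors minimising the trace of the
    CRLB (inverse Fisher information) for a static 2-D target location.
    A static-target FIM stands in for the full PCRLB recursion."""
    def fim(idx):
        J = 1e-9 * np.eye(2)                 # tiny prior keeps J invertible
        for i in idx:
            dx, dy = target - positions[i]
            r2 = dx * dx + dy * dy
            g = np.array([-dy, dx]) / r2     # gradient of bearing w.r.t. target
            J += np.outer(g, g) / sigma**2
        return J
    chosen = []
    for _ in range(k):
        best = min((i for i in range(len(positions)) if i not in chosen),
                   key=lambda i: np.trace(np.linalg.inv(fim(chosen + [i]))))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(2)
sensors = rng.uniform(-10, 10, size=(50, 2))   # 50 dropped UGS positions
picked = greedy_select(sensors, target=np.array([0.0, 0.0]), k=3)
print(picked)
```

Greedy selection evaluates only O(k·N) candidate FIMs instead of the combinatorial number of subsets, which is the point the abstract makes about real-time feasibility.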

  7. Large-scale optimal sensor array management for target tracking

    NASA Astrophysics Data System (ADS)

    Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.

    2003-12-01

    Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.

  8. Neural network sensor fusion: Creation of a virtual sensor for cloud-base height estimation

    NASA Astrophysics Data System (ADS)

    Pasika, Hugh Joseph Christopher

    2000-10-01

    Sensor fusion has become a significant area of signal processing research that draws on a variety of tools. Its goals are many; however, in this thesis, the creation of a virtual sensor is paramount. In particular, neural networks are used to simulate the output of a LIDAR (laser radar) that measures cloud-base height. Eye-safe LIDAR is more accurate than the standard tool that would be used for such measurement, the ceilometer. The desire is to make cloud-base height information available at a network of ground-based meteorological stations without actually installing LIDAR sensors. To accomplish this, fifty-seven sensors, ranging from multispectral satellite information to standard atmospheric measurements such as temperature and humidity, are fused in what can only be termed a very complex, nonlinear environment. The result is an accurate prediction of cloud-base height. Thus, a virtual sensor is created. A total of four different learning algorithms were studied, two global and two local. In each case, the very best state-of-the-art learning algorithms were selected. Local methods investigated are the regularized radial basis function network and the support vector machine. Global methods include the standard multilayer perceptron trained by backpropagation with momentum (used as a benchmark) and the multilayer perceptron trained via the Kalman filter algorithm. While accuracy is the primary concern, computational considerations potentially limit the application of several of the above techniques. Thus, in all cases care was taken to minimize computational cost. For example, in the case of the support vector machine, a method of partitioning the problem in order to reduce memory requirements and make the optimization over a large data set feasible was employed, and in the Kalman algorithm case, node-decoupling was used to dramatically reduce the number of operations required. Overall, the methods produced somewhat equivalent mean squared errors indicating
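    The regularized radial basis function network mentioned above is, at its core, ridge regression on Gaussian features. A minimal sketch with a synthetic two-input stand-in for the meteorological inputs and LIDAR truth (centers, width, and regularization are illustrative choices, not the thesis settings):

```python
import numpy as np

def fit_rbf(X, y, centers, width=1.0, reg=1e-3):
    """Regularized RBF network: ridge regression on Gaussian basis features."""
    def features(Xq):
        sq = np.sum((Xq[:, None, :] - centers[None, :, :]) ** 2, axis=2)
        return np.exp(-sq / (2 * width**2))
    Phi = features(X)
    # Normal equations with Tikhonov regularization.
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ y)
    return lambda Xq: features(Xq) @ w

rng = np.random.default_rng(3)
# Stand-in "sensor" inputs (e.g. temperature, humidity) and a smooth
# surrogate for the LIDAR cloud-base truth signal.
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
centers = rng.uniform(-1, 1, size=(40, 2))
virtual_sensor = fit_rbf(X, y, centers, width=0.5)
mse = np.mean((virtual_sensor(X) - y) ** 2)
print(mse)
```

The regularization term keeps the normal equations well conditioned, which is the computational-cost-vs-accuracy concern the abstract raises for the local methods.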

  9. Multivariable optimization of fusion reactor blankets

    SciTech Connect

    Meier, W.R.

    1984-04-01

    The optimization problem consists of four key elements: a figure of merit for the reactor, a technique for estimating the neutronic performance of the blanket as a function of the design variables, constraints on the design variables and neutronic performance, and a method for optimizing the figure of merit subject to the constraints. The first reactor concept investigated uses a liquid lithium blanket for breeding tritium and a steel blanket to increase the fusion energy multiplication factor. The capital cost per unit of net electric power produced is minimized subject to constraints on the tritium breeding ratio and radiation damage rate. The optimal design has a 91-cm-thick lithium blanket denatured to 0.1% ⁶Li. The second reactor concept investigated uses a BeO neutron multiplier and a LiAlO₂ breeding blanket. The total blanket thickness is minimized subject to constraints on the tritium breeding ratio, the total neutron leakage, and the heat generation rate in aluminum support tendons. The optimal design consists of a 4.2-cm-thick BeO multiplier and a 42-cm-thick LiAlO₂ breeding blanket enriched to 34% ⁶Li.

  10. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    PubMed Central

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the environment and sensor architecture built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to provide a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the types of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally, some results of the project related to the sensor network are highlighted. PMID:22294927

  12. Sensor Fusion for a Network of Processes/Systems with Highly Autonomous Sensors

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Yuan, Xiao-Jing

    2000-01-01

    This paper describes a distributed sensor-data-fusion paradigm and theory based on a previously developed theory to model sensors as highly autonomous units. Generic procedures are defined to reason and make decisions at the qualitative level. This facilitates distribution of intelligence (code and hardware) to the sensor level and peer-to-peer communication among sensors, controllers, and other devices in the system.

  13. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  15. Sensor and information fusion for improved hostile fire situational awareness

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.; Ludwig, William D.

    2010-04-01

    A research-oriented Army Technology Objective (ATO) named Sensor and Information Fusion for Improved Hostile Fire Situational Awareness uniquely focuses on the underpinning technologies to detect and defeat any hostile threat before, during, and after its occurrence. This is a joint effort led by the Army Research Laboratory, in partnership with the Armaments and the Communications-Electronics Research, Development, and Engineering Centers (ARDEC and CERDEC). It addresses distributed sensor fusion and collaborative situational awareness enhancements, focusing on the underpinning technologies to detect/identify potential hostile shooters prior to firing a shot and to detect/classify/locate the firing point of hostile small arms, mortars, rockets, RPGs, and missiles after the first shot. A field experiment not only addressed the performance of sensors of diverse modalities and the benefits of sensor fusion, but also gathered useful data to develop and demonstrate the ad hoc networking and dissemination of relevant data and actionable intelligence. Represented at this field experiment were various sensor platforms such as UGS, soldier-worn sensors, manned ground vehicles, UGVs, UAVs, and helicopters. This ATO continues to evaluate applicable technologies, including retro-reflection, UV, IR, visible, glint, LADAR, radar, acoustic, seismic, E-field, and narrow-band emission sensing and image processing techniques, to detect the threats with very high confidence. Networked fusion of multi-modal data will reduce false alarms and improve actionable intelligence by distributing grid coordinates, detection report features, and imagery of threats.

  16. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion. PMID:27046606
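
    The abstract does not spell out the module's position-tracking algorithm, but a standard baseline for fusing an IMU with an absolute distance sensor is the complementary filter: the drift-prone double-integrated acceleration is blended with the drift-free but noisy range measurement. The constants below are illustrative only.

```python
import math, random

# 1-D position-tracking sketch: the IMU path (double-integrated, biased,
# noisy acceleration) drifts; a noisy distance sensor does not. A
# complementary filter blends them. Generic fusion baseline, not the
# paper's actual method; all constants are invented.

random.seed(1)
dt, alpha = 0.01, 0.9
true_p = true_v = 0.0
est_p = est_v = 0.0

for k in range(1000):
    a = math.sin(0.01 * k)                      # true acceleration
    true_v += a * dt
    true_p += true_v * dt
    a_meas = a + random.gauss(0, 0.05) + 0.02   # noisy, biased IMU reading
    dist = true_p + random.gauss(0, 0.01)       # noisy range measurement
    est_v += a_meas * dt
    pred = est_p + est_v * dt                   # IMU-only prediction
    est_p = alpha * pred + (1 - alpha) * dist   # blend with range fix

print(round(abs(est_p - true_p), 4))
```

    The blend weight `alpha` trades IMU smoothness against range-sensor drift correction; here it keeps the position error bounded despite the accelerometer bias.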

  17. Conflict management based on belief function entropy in sensor fusion.

    PubMed

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Wireless sensor networks play an important role in intelligent navigation. They incorporate a group of sensors to overcome the limitations of a single detection system. Dempster-Shafer evidence theory can combine the sensor data of the wireless sensor network by data fusion, which contributes to the improvement of the accuracy and reliability of the detection system. However, because the data come from different sensors, there may be conflict among them in uncertain environments. Thus, this paper proposes a new method combining Deng entropy and evidence distance to address the issue. First, Deng entropy is adopted to measure the uncertain information. Then, evidence distance is applied to measure the conflict degree. The new method can cope with conflict effectively and improve the accuracy and reliability of the detection system. An example illustrates the efficiency of the new method, and the result is compared with those of existing methods. PMID:27330904
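
    The two ingredients the method combines can be sketched directly: Deng entropy generalizes Shannon entropy to basic probability assignments, and Jousselme's evidence distance is a common choice of conflict measure. The mass assignments below are hypothetical sensor reports, not the paper's example.

```python
import math

# Minimal Dempster-Shafer sketch: Deng entropy scores the uncertainty of
# a body of evidence; Jousselme's distance scores the conflict between
# two bodies of evidence. Illustrative ingredients only, not the paper's
# full fusion rule.

def deng_entropy(m):
    # E_d(m) = -sum over focal sets A of m(A) * log2( m(A) / (2^|A| - 1) )
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

def jousselme_distance(m1, m2):
    focal = list(set(m1) | set(m2))
    def jac(a, b):                      # Jaccard similarity of focal sets
        return len(a & b) / len(a | b)
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal]
    d2 = 0.5 * sum(diff[i] * diff[j] * jac(focal[i], focal[j])
                   for i in range(len(focal)) for j in range(len(focal)))
    return math.sqrt(max(d2, 0.0))

A, B, AB = frozenset('a'), frozenset('b'), frozenset('ab')
m1 = {A: 0.6, B: 0.1, AB: 0.3}          # hypothetical sensor 1 report
m2 = {A: 0.5, B: 0.3, AB: 0.2}          # hypothetical sensor 2 report
print(round(deng_entropy(m1), 3), round(jousselme_distance(m1, m2), 3))
```

    Mass on the compound set `AB` (ignorance) inflates Deng entropy through the `2^|A| - 1` term, which is exactly what makes it useful as an uncertainty weight.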

  18. Energy-Constrained Optimal Quantization for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiliang; Giannakis, Georgios B.

    2007-12-01

    As low power, low cost, and longevity of transceivers are major requirements in wireless sensor networks, optimizing their design under energy constraints is of paramount importance. To this end, we develop quantizers under strict energy constraints to effect optimal reconstruction at the fusion center. Propagation, modulation, as well as transmitter and receiver structures are jointly accounted for using a binary symmetric channel model. We first optimize quantization for reconstructing a single sensor's measurement, deriving the optimal number of quantization levels as well as the optimal energy allocation across bits. The constraints take into account not only the transmission energy but also the energy consumed by the transceiver's circuitry. Furthermore, we consider multiple sensors collaborating to estimate a deterministic parameter in noise. Similarly, the optimum energy allocation and optimum number of quantization bits are derived and tested with simulated examples. Finally, we study the effect of channel coding on the reconstruction performance under strict energy constraints and jointly optimize the number of quantization levels as well as the number of channel uses.
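
    The central trade-off can be captured in a toy model: more quantization bits cut granular distortion, but under a fixed energy budget they leave less energy per transmitted bit, raising the binary-symmetric-channel crossover probability. The energy model and constants below are illustrative, not the paper's.

```python
import math

# Toy bit-allocation trade-off for an energy-constrained quantizer sent
# over a BSC. All constants are invented for illustration.

def qfunc(x):                               # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

E_TOTAL = 12.0                              # energy units per sample
E_CIRCUIT = 0.5                             # per-bit circuitry cost

def distortion(b):
    e_tx = E_TOTAL / b - E_CIRCUIT          # transmit energy left per bit
    if e_tx <= 0:
        return float('inf')
    p = qfunc(math.sqrt(2 * e_tx))          # BPSK over AWGN -> BSC crossover
    granular = 2.0 ** (-2 * b) / 12         # uniform quantizer on [0, 1]
    # single-bit-error approximation: flipping bit i shifts value by 2^-i
    channel = p * sum((2.0 ** -i) ** 2 for i in range(1, b + 1))
    return granular + channel

best_b = min(range(1, 17), key=distortion)
print(best_b)
```

    An interior optimum appears: too few bits and granular error dominates, too many and channel errors do, which mirrors the joint optimization described in the abstract.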

  19. STAC: a comprehensive sensor fusion model for scene characterization

    NASA Astrophysics Data System (ADS)

    Kira, Zsolt; Wagner, Alan R.; Kennedy, Chris; Zutty, Jason; Tuell, Grady

    2015-05-01

    We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell phones or from human intelligence? In addressing these questions we have found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC allows for describing the progression of multi-sensor raw data through increasing levels of abstraction, and provides a way to easily compare fusion strategies. It provides for unambiguous description of how multi-sensor data are combined, the computational algorithms being used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model, and compare it to other existing models.

  20. Multiple image sensor data fusion through artificial neural networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  1. Sensor feature fusion for detecting buried objects

    SciTech Connect

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.; Hernandez, J.E.; Buhl, M.R.; Schaich, P.C.; Kane, R.J.; Barth, M.J.; DelGrande, N.K.

    1993-04-01

    Given multiple registered images of the earth's surface from dual-band sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two infrared sensors (5-micron and 10-micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. The first step, referred to as feature selection, determines the features of sub-images which result in the greatest separability among the classes. The second step, image labeling, uses the selected features and the decisions from a pattern classifier to label the regions in the image which are likely to correspond to buried mines. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the sensors add value to the detection system. The most important features from the various sensors are fused using supervised learning pattern classifiers (including neural networks). We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.
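
    The feature-selection step, ranking candidate features by class separability, can be illustrated with a simple two-class Fisher criterion. The feature names and values below are invented, not the paper's data.

```python
import statistics

# Hypothetical illustration of separability-based feature selection:
# score each candidate sub-image feature by how far apart the two class
# means are relative to their spread, then rank. All values are made up.

def fisher_score(mine_vals, clutter_vals):
    m1, m2 = statistics.fmean(mine_vals), statistics.fmean(clutter_vals)
    v1 = statistics.pvariance(mine_vals)
    v2 = statistics.pvariance(clutter_vals)
    return (m1 - m2) ** 2 / (v1 + v2)

features = {                                 # (mine samples, clutter samples)
    'ir_contrast': ([3.1, 2.9, 3.3, 3.0], [1.0, 1.2, 0.9, 1.1]),
    'gpr_energy':  ([5.0, 4.7, 5.2, 4.9], [4.8, 5.1, 4.9, 5.0]),
    'texture_var': ([2.0, 2.2, 1.9, 2.1], [1.5, 1.4, 1.6, 1.5]),
}

ranked = sorted(features, key=lambda f: fisher_score(*features[f]),
                reverse=True)
print(ranked)
```

    A feature like `gpr_energy`, whose class means coincide, scores near zero and would be dropped, saving computation exactly as the abstract describes.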

  2. Optimization Strategies for Sensor and Actuator Placement

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Kincaid, Rex K.

    1999-01-01

    This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
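
    As a concrete instance of the combinatorial placement problem, a greedy max-coverage heuristic (a common baseline in this literature, not necessarily one of the surveyed methods) picks one candidate location at a time. Grid, candidates, and radius below are invented.

```python
# Greedy set-cover sketch of combinatorial sensor placement: repeatedly
# pick the candidate location covering the most still-uncovered points.
# Geometry and candidate sites are made up for illustration.

points = [(x, y) for x in range(6) for y in range(6)]   # area to observe
candidates = [(1, 1), (1, 4), (4, 1), (4, 4), (2, 2), (3, 3)]
RADIUS2 = 5                                             # sensing radius^2

def covers(c, p):
    return (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 <= RADIUS2

def greedy_place(k):
    chosen, uncovered = [], set(points)
    for _ in range(k):
        remaining = [c for c in candidates if c not in chosen]
        best = max(remaining,
                   key=lambda c: sum(covers(c, p) for p in uncovered))
        chosen.append(best)
        uncovered -= {p for p in uncovered if covers(best, p)}
    return chosen, len(points) - len(uncovered)

chosen, covered = greedy_place(3)
print(chosen, covered)
```

    Greedy selection is not optimal in general, but for coverage-type (submodular) objectives it carries a constant-factor guarantee, which is one reason such heuristics recur in placement studies.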

  3. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    NASA Astrophysics Data System (ADS)

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can benefit heavily from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information, which could complement the visual sensor data, is not yet established for this purpose. In addition, biomechanical modeling of the surgical site can help in reflecting changes which cannot be captured by intraoperative sensors. We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor, as well as the accuracy of the fusion of force measurements, surface reconstruction and biomechanical modeling.

  4. Multi-sensor management for data fusion in target tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiaokun; Chen, Genshe; Blasch, Erik; Patrick, Jim; Yang, Chun; Kadar, Ivan

    2009-05-01

    Multi-sensor management for data fusion in target tracking concerns issues of sensor assignment and scheduling by managing or coordinating the use of multiple sensor resources. Since a centralized sensor management technique has a crucial limitation, in that the failure of the central node would cause whole-system failure, a decentralized sensor management (DSM) scheme is increasingly important in modern multi-sensor systems. DSM is afforded in modern systems through increased bandwidth, wireless communication, and enhanced power. However, protocols for system control are needed to manage device access. As game theory offers learning models for distributed allocation of surveillance resources and provides mechanisms to handle the uncertainty of the surveillance area, we propose an agent-based negotiable game theoretic approach for decentralized sensor management (ANGADS). With the decentralized sensor management scheme, sensor assignment occurs locally, and there is no central node, which reduces the risk of whole-system failure. Simulation results for a multi-sensor target-tracking scenario demonstrate the applicability of the proposed approach.

  5. Sensor and information fusion for improved hostile threat situational awareness

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.; Ludwig, William D.

    2011-06-01

    The U.S. Army Research Laboratory (ARL) has recently concluded a research experiment to study the benefits of multimodal sensor fusion for improved hostile-fire-defeat (HFD) in an urban setting. This joint effort was led by ARL in partnership with other R&D centers and private industry. The primary goals were to detect hostile fire events (small arms, mortars, rockets, IEDs) and hostile human activities by providing solutions before, during, and after the events; to improve sensor networking technologies; to develop multimodal sensor data fusion; and to determine effective dissemination techniques for the resultant actionable intelligence. Technologies included ultraviolet, infrared, retroreflection, visible, glint, Laser Detection and Ranging (LADAR), radar, acoustic, seismic, E-field, magnetic, and narrowband emission technologies; all were found to provide useful performance. The experiment demonstrated that combining data and information from diverse sensor modalities can significantly improve the accuracy of threat detections and the effectiveness of the threat response. It also demonstrated that dispersing sensors over a wide range of platforms (fixed site, ground vehicles, unmanned ground and aerial vehicles, aerostat, Soldier-worn) added flexibility and agility in tracking hostile actions. In all, the experiment demonstrated that multimodal fusion will improve hostile event responses, strike force efficiency, and force protection effectiveness.

  6. Integration of multiple sensor fusion in controller design.

    PubMed

    Abdelrahman, Mohamed; Kandasamy, Parameshwaran

    2003-04-01

    The main focus of this research is to reduce the risk of a catastrophic response of a feedback control system when some of the feedback data from the system sensors are not reliable, while maintaining a reasonable performance of the control system. In this paper a methodology for integrating multiple sensor fusion into the controller design is presented. The multiple sensor fusion algorithm produces, in addition to the estimate of the measurand, a parameter that measures the confidence in the estimated value. This confidence is integrated as a parameter into the controller to produce fast system response when the confidence in the estimate is high, and a slow response when the confidence in the estimate is low. Conditions for the stability of the system with the developed controller are discussed. This methodology is demonstrated on a cupola furnace model. The simulations illustrate the advantages of the new methodology. PMID:12708539
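
    The core idea, a fused estimate accompanied by a confidence value that scales the controller's aggressiveness, can be sketched as follows. The plant, gains, and sensor models are invented for illustration, not the paper's cupola furnace.

```python
# Confidence-modulated feedback sketch: the fusion step returns both an
# estimate and a confidence in [0, 1]; the proportional gain is scaled by
# that confidence, so the loop responds quickly to trusted estimates and
# cautiously to doubtful ones. All numbers below are made up.

def fuse(readings):
    # readings: list of (value, confidence); confidence-weighted average
    total = sum(c for _, c in readings)
    est = sum(v * c for v, c in readings) / total
    conf = total / len(readings)            # mean confidence in [0, 1]
    return est, conf

def step(x, setpoint, readings, k_max=0.8):
    est, conf = fuse(readings)
    u = k_max * conf * (setpoint - est)     # gain shrinks with doubt
    return x + u                            # trivial first-order plant

x = 0.0
for _ in range(30):
    readings = [(x + 0.01, 0.9), (x - 0.01, 0.8)]   # reliable sensors
    x = step(x, 1.0, readings)
print(round(x, 3))
```

    With low confidences the effective gain collapses toward zero, giving the slow, safe response the abstract describes for unreliable feedback.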

  7. Multi-sensor data fusion framework for CNC machining monitoring

    NASA Astrophysics Data System (ADS)

    Duro, João A.; Padget, Julian A.; Bowen, Chris R.; Kim, H. Alicia; Nassehi, Aydin

    2016-01-01

    Reliable machining monitoring systems are essential for lowering production time and manufacturing costs. Existing expensive monitoring systems focus on prevention/detection of tool malfunctions and provide information for process optimisation by force measurement. An alternative and cost-effective approach is monitoring acoustic emissions (AEs) from machining operations, which act as a robust proxy. The limitations of AEs include high sensitivity to sensor position and cutting parameters. In this paper, a novel multi-sensor data fusion framework is proposed to enable identification of the best sensor locations for monitoring cutting operations, identification of the sensors that provide the best signal, and derivation of signals with an enhanced periodic component. Our experimental results reveal that by utilising the framework, and using only three sensors, signal interpretation improves substantially and the monitoring system reliability is enhanced for a wide range of machining parameters. The framework provides a route to overcoming the major limitations of AE-based monitoring.

  8. Sensor fusion II: Human and machine strategies; Proceedings of the Meeting, Philadelphia, PA, Nov. 6-9, 1989

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1990-01-01

    Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.

  9. Sensor Fusion for Nuclear Proliferation Activity Monitoring

    SciTech Connect

    Adel Ghanem, Ph D

    2007-03-30

    The objective of Phase 1 of this STTR project is to demonstrate a Proof-of-Concept (PoC) of the Geo-Rad system that integrates a location-aware SmartTag (made by ZonTrak) and a radiation detector (developed by LLNL). It also includes the ability to transmit the collected radiation data and location information to the ZonTrak server (ZonService). The collected data is further transmitted to a central server at LLNL (the Fusion Server) to be processed in conjunction with overhead imagery to generate location estimates of nuclear proliferation and radiation sources.

  10. A weighting/threshold approach to sensor fusion

    SciTech Connect

    Amai, W.A.

    1988-01-01

    A weighting/threshold-based sensor fusion algorithm to decrease the false alarm rate (FAR) while maintaining a high probability of detection (PD) is being tested in the Remote Security Station (RSS). The RSS is being developed to provide temporary intrusion-detection capability on short notice. It consists of a portable, multisensor pod connected by cable to a manned control console. The pod is set up outdoors in the location where security is needed; the console and operator are located in a command bunker up to a kilometer away. The RSS software filters out alarms from low-believability sensors and also filters out alarms in low-priority areas. Each sensor's believability is proportionally encoded as a weighting, which is continually updated as a function of the environmental conditions affecting that sensor. Area priority is proportionally encoded as a threshold value for each pie-wedge area around the pod. When an event in an area triggers one or more sensors, their weightings are summed and then compared to the area threshold value. The operator is informed of the event only if the summed weighting exceeds the threshold. Extensive field testing has not yet been done, but some results show the current sensor fusion algorithm decreases the FAR at the expense of lowering the PD. To increase the PD while retaining a low FAR, the weighting/threshold algorithm will be modified to use temporal data and pattern recognition. 4 refs., 2 figs.
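
    The weighting/threshold rule described above reduces to a few lines. The sensor weights, degradation factors, and area thresholds below are made up for illustration.

```python
# Sketch of the RSS weighting/threshold rule: each sensor carries a
# believability weight (downgraded by current environmental conditions),
# each pie-wedge area carries a priority threshold, and an event is
# reported only when the summed weights of the triggered sensors exceed
# the area's threshold. All numbers are invented.

base_weights = {'pir': 0.5, 'microwave': 0.4, 'seismic': 0.3}
degrade = {'pir': 0.5}            # e.g. heavy rain halves PIR believability
thresholds = {'gate': 0.4, 'fence': 0.7, 'field': 0.9}   # area priorities

def effective_weight(sensor):
    return base_weights[sensor] * degrade.get(sensor, 1.0)

def report(triggered, area):
    score = sum(effective_weight(s) for s in triggered)
    return score > thresholds[area]

print(report({'pir', 'microwave'}, 'gate'))   # 0.25 + 0.4 = 0.65 > 0.4
print(report({'pir'}, 'fence'))               # 0.25 < 0.7, suppressed
```

    A single degraded sensor cannot clear a high-priority threshold on its own, which is exactly the FAR-reduction mechanism the abstract describes.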

  11. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications) and sensing each offer distinct opportunities for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while
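
    The thesis's lifetime-maximization problem is too large for a snippet, but its workhorse, the subgradient method for nondifferentiable convex objectives, is easy to demonstrate on a toy function f(x) = |x - 3| + 2|x + 1|, whose minimizer is x = -1. The objective is invented; only the method matches the abstract.

```python
# Subgradient descent with diminishing step sizes on a nondifferentiable
# convex toy objective. Illustrates the numerical engine mentioned in the
# abstract, not the network-lifetime problem itself.

def f(x):
    return abs(x - 3) + 2 * abs(x + 1)

def subgrad(x):
    # any subgradient of |x - 3| plus twice a subgradient of |x + 1|
    g = 1 if x > 3 else (-1 if x < 3 else 0)
    g += 2 * (1 if x > -1 else (-1 if x < -1 else 0))
    return g

x = 10.0
for k in range(1, 2001):
    x -= (1.0 / k) * subgrad(x)     # step sizes 1/k: summable^-1, vanishing
print(round(x, 2), round(f(x), 2))
```

    The 1/k step schedule (divergent sum, vanishing steps) is the classical condition under which subgradient iterates converge to the optimum of a convex function; saddle-point variants apply the same idea to a Lagrangian.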

  12. A Data Fusion Method in Wireless Sensor Networks

    PubMed Central

    Izadi, Davood; Abawajy, Jemal H.; Ghanavati, Sara; Herawan, Tutut

    2015-01-01

    The success of a Wireless Sensor Network (WSN) deployment strongly depends on the quality of service (QoS) it provides regarding issues such as data accuracy, data aggregation delays and network lifetime maximisation. This is especially challenging in data fusion mechanisms, where a small fraction of low-quality data in the fusion input may negatively impact the overall fusion result. In this paper, we present a fuzzy-based data fusion approach for WSNs with the aim of increasing the QoS whilst reducing the energy consumption of the sensor network. The proposed approach is able to distinguish and aggregate only the true values of the collected data, thus reducing the burden of processing the entire data at the base station (BS). It is also able to eliminate redundant data and consequently reduce energy consumption, thus increasing the network lifetime. We studied the effectiveness of the proposed data fusion approach experimentally and compared it with two baseline approaches in terms of data collection, number of transferred data packets and energy consumption. The results of the experiments show that the proposed approach achieves better results than the baseline approaches. PMID:25635417
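
    As a rough sketch of the filtering idea (the paper's actual fuzzy membership functions are not given in the abstract), readings can be scored by closeness to the cluster median and fused only if their score clears a cutoff:

```python
import statistics

# Toy stand-in for the fuzzy filtering step: score each reading with a
# crude membership function centered on the median, drop low-membership
# readings, and average the rest, so one faulty node cannot skew the
# aggregate forwarded to the base station. Constants are invented.

def fuse(readings, spread=2.0, cutoff=0.5):
    med = statistics.median(readings)
    member = [1.0 / (1.0 + ((r - med) / spread) ** 2) for r in readings]
    kept = [r for r, m in zip(readings, member) if m >= cutoff]
    return sum(kept) / len(kept)

readings = [20.1, 19.8, 20.3, 20.0, 35.7]   # last node is faulty
print(round(fuse(readings), 2))
```

    The outlier at 35.7 receives a near-zero membership score and is excluded, so the fused value tracks the four consistent nodes.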

  13. Sensor-fusion-based biometric identity verification

    SciTech Connect

    Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.; Martinez, R.F.; Bartholomew, J.W.; Jordan, J.B.; Flachs, G.M.; Bao, Z.; Zhu, L.

    1998-02-01

    Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and the tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near-minimal probability-of-error decision algorithm.

  14. Multisensor fusion using the sensor algorithm research expert system

    NASA Astrophysics Data System (ADS)

    Bullock, Michael E.; Miltonberger, Thomas W.; Reinholdsten, Paul A.; Wilson, Kathleen

    1991-08-01

    A method for object recognition using a multisensor model-based approach has been developed. The sensor algorithm research expert system (SARES) is a Sun-based workstation for model-based object recognition algorithm development. SARES is a means to perform research into multiple levels of geometric and scattering models, image and signal feature extraction, hypothesis management, and matching strategies. SARES multisensor fusion allows for multiple geometric representations and decompositions, and sensor location transformations, as well as feature prediction, matching, and evidence accrual. It is shown that the fusion algorithm can exploit the synergistic information contained in IR and synthetic aperture radar (SAR) imagery, yielding increased object recognition accuracy and confidence over single sensor exploitation alone. The fusion algorithm has the added benefit of reducing the number of computations by virtue of simplified object model combinatorics. That is, the additional sensor information eliminates a large number of the incorrect object hypotheses early in the algorithm. This provides a focus of attention on those object hypotheses which are closest to the correct hypothesis.

  15. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  16. Coresident sensor fusion and compression using the wavelet transform

    SciTech Connect

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image. This can be done in either an ad hoc or a model-based approach. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
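
A hedged, minimal sketch of the multiresolution fusion idea in this record, using a single-level Haar decomposition and a max-absolute-coefficient rule for the detail bands. The transform level, fusion rule and test images are illustrative assumptions; the paper's wavelet machinery is more general.

```python
import numpy as np

# Illustrative one-level 2D Haar wavelet image fusion: average the
# approximation (coarse) bands, keep the larger-magnitude detail coefficients.

def haar2d(img):
    """One-level 2D Haar transform of an even-sized image: (a, h, v, d) bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    c1, c2 = haar2d(img1), haar2d(img2)
    a = (c1[0] + c2[0]) / 2                            # average coarse content
    details = [np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs detail rule
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2d(a, *details)
```

Fusing an image with itself reproduces it exactly, which is a useful sanity check for any coefficient-domain fusion rule.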

  17. Diagnostics and data fusion of robotic sensors

    SciTech Connect

    Dhar, M.; Bardsley, S.; Cowper, L.; Hamm, R.; Jammu, V.; Wagner, J.

    1996-12-31

    Robotic systems for remediation of hazardous waste sites must be highly reliable to avoid equipment failures and subsequent possible exposure of personnel to hazardous environments. Safe, efficient cleanup operations also require accurate, complete knowledge of the task space. This paper presents progress made on an 18-month program to meet these needs. To enhance robot reliability, a conceptual design of a monitoring and diagnostic system is being developed to predict the onset of mechanical failure modes, provide maximum lead time to make operational changes or repairs, and minimize the occurrence of on-site breakdowns. To ensure safe operation, a comprehensive software package is being developed that will fuse data from multiple surface mapping sensors and poses so as to reduce the error effects in individual data points and provide accurate 3-D maps of a work space.

  18. Use of data fusion to optimize contaminant transport predictions

    SciTech Connect

    Eeckhout, E. van

    1997-10-01

    The original data fusion workstation, as envisioned by Coleman Research Corp., was constructed under funding from DOE (EM-50) in the early 1990s. The intent was to demonstrate the viability of fusion and analysis of data from various types of sensors, primarily geophysical, for waste site characterization. This overall concept changed over time and evolved more towards hydrogeological (groundwater) data fusion after some initial geophysical fusion work at Coleman. This initial geophysical fusion platform was tested at Hanford and Fernald, and the later hydrogeological fusion work has been demonstrated at Pantex, Savannah River, the US Army Letterkenny Depot, a DoD Massachusetts site and a DoD California site. The hydrogeologic data fusion package has been spun off to a company named Fusion and Control Technology, Inc. This package is called the Hydrological Fusion And Control Tool (Hydro-FACT) and is being sold as a product that links with the software package MS-VMS (MODFLOW-SURFACT Visual Modeling System), sold by HydroGeoLogic, Inc. MODFLOW is a USGS development and is in the public domain. Since the government paid for the data fusion development at Coleman, the government and their contractors have access to the data fusion technology in this hydrogeologic package for certain computer platforms, but would probably have to hire FACT (Fusion and Control Technology, Inc.) and/or HydroGeoLogic for some level of software and services. Further discussion in this report will concentrate on the hydrogeologic fusion module that is being sold as Hydro-FACT, which can be linked with MS-VMS.

  19. Sensor fusion by pseudo information measure: a mobile robot application.

    PubMed

    Asharif, Mohammad Reza; Moshiri, Behzad; HoseinNezhad, Reza

    2002-07-01

    In any autonomous mobile robot, one of the most important issues to be designed and implemented is environment perception. In this paper, a new approach is formulated in order to perform sensory data integration for generation of an occupancy grid map of the environment. This method is an extended version of the Bayesian fusion method for independent sources of information. The performance of the proposed method of fusion and its sensitivity are discussed. Map building simulation for a cylindrical robot with eight ultrasonic sensors and mapping implementation for a Khepera robot have been separately tried in simulation and experimental works. A new neural structure is introduced for conversion of the proximity data given by Khepera IR sensors to occupancy probabilities. Path planning experiments have also been applied to the resulting maps. For each map, two factors are considered and calculated: the fitness and the augmented occupancy of the map with respect to the ideal map. The length and the least distance to obstacles were two further factors calculated for the routes resulting from the path planning experiments. Experimental and simulation results show that by using the new fusion formulas, more informative maps of the environment are obtained, and with these maps more appropriate routes can be achieved. Actually, there is a tradeoff between the length of the resulting routes and their safety, and by choosing the proper fusion function this tradeoff is suitably tuned for different map building applications. PMID:12160343
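
The baseline that this record extends, Bayesian fusion of independent occupancy evidence, can be sketched per grid cell in log-odds form. This is the standard textbook update (with made-up probabilities), not the paper's pseudo-information extension.

```python
import math

# Illustrative Bayesian occupancy-grid update in log-odds form: fuse
# independent per-sensor occupancy probabilities for a single cell.

def logit(p):
    return math.log(p / (1.0 - p))

def fuse_occupancy(prior, sensor_probs):
    """Combine a prior occupancy probability with independent sensor
    estimates; returns the posterior occupancy probability of the cell."""
    l = logit(prior) + sum(logit(p) - logit(prior) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-l))

p = fuse_occupancy(0.5, [0.8, 0.7])  # two sensors agreeing the cell is occupied
```

Two moderately confident, agreeing sensors yield a posterior more confident than either alone, which is the behaviour the paper's fusion function then tunes against the route-length/safety tradeoff.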

  20. Optimal sensor placement in structural health monitoring using discrete optimization

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Büyüköztürk, Oral

    2015-12-01

    The objective of optimal sensor placement (OSP) is to obtain a sensor layout that gives as much information about the dynamic system as possible in structural health monitoring (SHM). The process of OSP can be formulated as a discrete minimization (or maximization) problem with the sensor locations as the design variables, subject to the constraint of a given number of sensors. In this paper, we propose a discrete optimization scheme based on the artificial bee colony algorithm to solve the OSP problem after first transforming it into an integer optimization problem. A modal assurance criterion-oriented objective function is investigated to measure the utility of a sensor configuration in the optimization process based on the modal characteristics of a reduced order model. The reduced order model is obtained using an iterated improved reduced system technique. The constraint is handled by a penalty term added to the objective function. Three examples, including a 27-bar truss bridge, a 21-storey building on the MIT campus and the 610 m high Canton Tower, are investigated to test the applicability of the proposed algorithm to OSP. In addition, the proposed OSP algorithm is experimentally validated on a physical laboratory structure, a three-story two-bay steel frame instrumented with triaxial accelerometers. Results indicate that the proposed method is efficient and can potentially be used for OSP in practical SHM.
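
The modal assurance criterion (MAC) objective in this record can be illustrated with a toy computation. The brute-force search below is a hedged stand-in for the paper's artificial bee colony optimizer, and the mode-shape matrices in the usage are invented examples.

```python
import numpy as np
from itertools import combinations

# Illustrative MAC-driven placement objective: for a candidate set of sensor
# locations (rows of the mode-shape matrix), the largest off-diagonal MAC
# value measures how hard the measured mode shapes are to distinguish
# (smaller is better).

def max_offdiag_mac(phi):
    """phi: (n_sensors, n_modes) mode-shape matrix restricted to the sensors."""
    g = phi.T @ phi
    d = np.diag(g)
    mac = (g ** 2) / np.outer(d, d)
    np.fill_diagonal(mac, 0.0)
    return mac.max()

def best_placement(phi_full, n_sensors):
    """Exhaustive baseline over all placements (the paper instead searches
    this space with an artificial bee colony algorithm)."""
    candidates = combinations(range(phi_full.shape[0]), n_sensors)
    return min(candidates, key=lambda s: max_offdiag_mac(phi_full[list(s)]))
```

With three candidate locations and two modes, the placement whose restricted mode shapes stay orthogonal wins, matching the intuition that a good layout keeps modes distinguishable.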

  1. Spectral photoplethysmographic imaging sensor fusion for enhanced heart rate detection

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Clausi, David A.; Wong, Alexander

    2016-03-01

    Continuous heart rate monitoring can provide important context for quantitative clinical assessment in scenarios such as long-term health monitoring and disability prevention. Photoplethysmographic imaging (PPGI) systems are particularly useful for such monitoring scenarios as contact-based devices pose problems related to comfort and mobility. Each pixel can be regarded as a virtual PPG sensor, thus enabling simultaneous measurements of multiple skin sites. Existing PPGI systems analyze temporal PPGI sensor fluctuations related to hemodynamic pulsations across a region of interest to extract the blood pulse signal. However, due to spatially varying optical properties of the skin, the blood pulse signal may not be consistent across all PPGI sensors, leading to inaccurate heart rate monitoring. To increase the hemodynamic signal-to-noise ratio (SNR), we propose a novel spectral PPGI sensor fusion method for enhanced estimation of the true blood pulse signal. Motivated by the observation that PPGI sensors with high hemodynamic SNR exhibit a spectral energy peak at the heart rate frequency, an entropy-based fusion model was formulated to combine PPGI sensors based on the sensors' spectral energy distribution. The optical PPGI device comprised a near infrared (NIR) sensitive camera and an 850 nm LED. Spatially uniform irradiance was achieved by placing optical elements along the LED beam, providing consistent illumination across the skin area. Dual-mode temporally coded illumination was used to negate the temporal effect of ambient illumination. Experimental results show that the spectrally weighted PPGI method can accurately and consistently extract heart rate information where traditional region-based averaging fails.
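
A rough sketch of the spectral weighting idea in this record: weight each virtual PPG sensor by how much of its spectral energy falls in a plausible heart-rate band. The band limits and the simple energy-ratio weight are assumptions for illustration; the paper uses an entropy-based model over the full spectral energy distribution.

```python
import numpy as np

# Hypothetical spectrally weighted fusion of per-pixel PPG signals:
# each virtual sensor is weighted by the fraction of its spectral
# energy inside an assumed heart-rate band (0.8-3.0 Hz here).

def band_energy_weight(signal, fs, band=(0.8, 3.0)):
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spec.sum()
    return spec[in_band].sum() / total if total > 0 else 0.0

def fuse_ppgi(signals, fs):
    """signals: (n_sensors, n_samples) array. Returns the weighted blood
    pulse estimate; sensors with mostly out-of-band energy are suppressed."""
    w = np.array([band_energy_weight(s, fs) for s in signals])
    w = w / w.sum()
    return w @ signals
```

A pixel dominated by a 1.2 Hz pulsation gets a weight near one, while a pixel dominated by out-of-band fluctuation gets a weight near zero, so the fused signal tracks the pulsatile pixel rather than a plain regional average.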

  2. Inertial Sensor Error Reduction through Calibration and Sensor Fusion.

    PubMed

    Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman Filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking. PMID:26901198

  3. Inertial Sensor Error Reduction through Calibration and Sensor Fusion

    PubMed Central

    Lambrecht, Stefan; Nogueira, Samuel L.; Bortole, Magdo; Siqueira, Adriano A. G.; Terra, Marco H.; Rocon, Eduardo; Pons, José L.

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman Filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than, or at least as well as, local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking. PMID:26901198

  4. Sensor fusion method for off-road vehicle position estimation

    NASA Astrophysics Data System (ADS)

    Guo, Linsong; Zhang, Qin; Han, Shufeng

    2002-07-01

    A FOG-aided GPS fusion system was developed for positioning an off-road vehicle; the system consists of a six-axis inertial measurement unit (IMU) and a Garmin global positioning system (GPS) receiver. An observation-based Kalman filter was designed to integrate the readings from both sensors so that the noise in the GPS signal was smoothed out, the redundant information was fused and a high update rate of output signals was obtained. The drift error of the FOG was also compensated. By using this system, a low-cost GPS can be used to replace an expensive GPS of higher accuracy. Measurement and fusion results showed that the positioning error of the vehicle estimated using this fusion system was greatly reduced from that of a GPS-only system. At a vehicle speed of about 1.34 m/s, the mean bias in the East axis of the fusion system was 0.48 m, compared to the GPS mean bias of 1.28 m, and the mean bias in the North axis was reduced to 0.32 m from 1.48 m. The update frequency of the fusion system was increased to 9 Hz from the 1 Hz of the GPS. A prototype system was installed on a sprayer for vehicle positioning measurement.
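
The predict/update structure of such a GPS/inertial Kalman filter can be sketched in one dimension: the high-rate inertial data drives the prediction, and each low-rate GPS fix corrects the drifting estimate. All noise values below are illustrative assumptions; the paper's filter is the multi-axis, observation-based analogue.

```python
# Minimal 1-D Kalman filter sketch fusing inertial dead reckoning with
# occasional GPS position fixes (scalar state: position along one axis).

def kalman_step(x, v, p, dt, accel, q, gps=None, r=None):
    """One predict step from inertial data and an optional GPS update.
    x: position, v: velocity, p: position variance,
    q: process noise, gps/r: GPS measurement and its variance."""
    # Predict: integrate the inertial measurement forward.
    v = v + accel * dt
    x = x + v * dt
    p = p + q
    # Update: correct with a GPS fix when one is available.
    if gps is not None:
        k = p / (p + r)          # Kalman gain
        x = x + k * (gps - x)
        p = (1.0 - k) * p
    return x, v, p

# One second of dead reckoning at 1 m/s, then a GPS fix at 1.05 m.
x, v, p = kalman_step(0.0, 1.0, 1.0, dt=1.0, accel=0.0, q=0.1, gps=1.05, r=0.1)
```

Because the GPS variance (0.1) is much smaller than the predicted variance (1.1), the gain is near one and the estimate moves most of the way toward the fix while the variance shrinks, which is how the fused system smooths GPS noise yet keeps the IMU's high update rate between fixes.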

  5. Neural network fusion and inversion model for NDIR sensor measurement

    NASA Astrophysics Data System (ADS)

    Cieszczyk, Sławomir; Komada, Paweł

    2015-12-01

    This article presents the problem of the impact of environmental disturbances on the extraction of information from measurements. As an example, an NDIR sensor is studied, which can measure industrial or environmental gases of varying temperature. The issue of changing values of influence quantities appears in many industrial measurements, and developing appropriate algorithms that are robust to changing conditions is a key problem. Additional input variables appear in the resulting mathematical model of the inverse problem. Due to the difficulties in the mathematical description of the inverse model, neural networks have been applied; they do not require initial assumptions about the structure of the created model, and they provide correction of sensor non-linearity as well as correction of the influence of interfering quantities. The analyzed issue requires additional measurement of the disturbing quantity and its combination with the measurement of the primary quantity. Combining this information with the use of neural networks belongs to the class of sensor fusion algorithms.

  6. Sensor fusion using neural network in the robotic welding

    SciTech Connect

    Ohshima, Kenji; Yabe, Masaaki; Akita, Kazuya; Kugai, Katsuya; Yamane, Satoshi; Kubota, Takefumi

    1995-12-31

    It is important to realize intelligent welding robots in order to obtain a good quality of welding results. For this purpose, it is required to detect the torch height, the torch attitude, and the deviation from the center of the gap. In order to detect these simultaneously, the authors propose sensor fusion using a neural network, i.e., the information concerning the welding torch is detected by using both the welding current and the welding voltage. First, the authors deal with the welding phenomena as the melting phenomena in the electrode wire of the MIG welding and the CO₂ short circuiting welding. Next, the training data of the neural networks are made from numerical simulations. The neuro arc sensor is trained so as to get the desired performance of the sensor. By using it, seam tracking is carried out in the T-joint.

  7. Variance estimation for radiation analysis and multi-sensor fusion.

    SciTech Connect

    Mitchell, Dean James

    2010-09-01

    Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
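
A minimal sketch of the classical inverse-variance fusion rule that underlies this kind of multi-sensor parameter estimation. The report's actual estimator also models computational uncertainties and full detector response functions; the numbers here are invented.

```python
# Minimum-variance fusion of independent sensor estimates of one parameter:
# each estimate is weighted by its inverse variance, and the fused variance
# is smaller than any individual sensor's variance.

def inverse_variance_fusion(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# A precise sensor reading 10.0 (var 1.0) and a coarse one reading 12.0 (var 4.0).
value, var = inverse_variance_fusion([10.0, 12.0], [1.0, 4.0])
```

The fused estimate sits closer to the more precise sensor, and the fused variance (0.8) beats both inputs; a large disagreement between sensors relative to these variances would signal the kind of inconsistency the report folds into the parameter uncertainties.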

  8. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial in n, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092

  9. Optimizing Retransmission Threshold in Wireless Sensor Networks.

    PubMed

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial in n, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
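
The threshold-selection problem in these two records can be illustrated with a brute-force toy version: pick a per-hop retransmission limit u_i so that end-to-end delivery probability is maximized while the worst-case number of attempts fits the deadline. The independent per-attempt loss model and exhaustive search are simplifying assumptions; the papers solve the real problem with dynamic programming and approximation schemes.

```python
from itertools import product

# Toy model: hop i succeeds on each attempt with probability p_i, so it
# succeeds within u_i attempts with probability 1 - (1 - p_i)^u_i.

def delivery_probability(link_probs, thresholds):
    prob = 1.0
    for p, u in zip(link_probs, thresholds):
        prob *= 1.0 - (1.0 - p) ** u
    return prob

def best_thresholds(link_probs, deadline, u_max):
    """Exhaustively search threshold vectors whose worst-case total number
    of attempts fits within the deadline (measured in attempt slots)."""
    best, best_prob = None, -1.0
    for thresholds in product(range(1, u_max + 1), repeat=len(link_probs)):
        if sum(thresholds) > deadline:
            continue                    # worst case would miss the deadline
        prob = delivery_probability(link_probs, thresholds)
        if prob > best_prob:
            best, best_prob = thresholds, prob
    return best, best_prob

# A reliable first hop (0.9) and a lossy second hop (0.5), 4 attempt slots.
best, prob = best_thresholds([0.9, 0.5], deadline=4, u_max=3)
```

The search gives the lossy hop all the spare retransmissions (thresholds (1, 3)), which is exactly the link-quality-aware behaviour that a uniform threshold cannot express.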

  10. Signal processing, sensor fusion, and target recognition; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992

    NASA Astrophysics Data System (ADS)

    Libby, Vibeke; Kadar, Ivan

    Consideration is given to a multiordered mapping technique for target prioritization, a neural network approach to multiple-target-tracking problems, a multisensor fusion algorithm for multitarget multibackground classification, deconvolution of multiple images of the same object, neural networks and genetic algorithms for combinatorial optimization of sensor data fusion, classification of atmospheric acoustic signals from fixed-wing aircraft, and an optics approach to sensor fusion for target recognition. Also treated are a zoom lens for automatic target recognition, a hybrid model for the analysis of radar sensors, an innovative test bed for developing and assessing air-to-air noncooperative target identification algorithms, SAR imagery scene segmentation using fractal processing, sonar feature-based bandwidth compression, laboratory experiments for a new sonar system, computational algorithms for discrete transform using fixed-size filter matrices, and pattern recognition for power systems.

  11. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  12. Multi-Sensor Fusion of Landsat 8 Thermal Infrared (TIR) and Panchromatic (PAN) Images

    PubMed Central

    Jung, Hyung-Sup; Park, Sung-Whan

    2014-01-01

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change. PMID:25529207

  13. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.

    PubMed

    Jung, Hyung-Sup; Park, Sung-Whan

    2014-01-01

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change. PMID:25529207

  14. Infrared sensors and sensor fusion; Proceedings of the Meeting, Orlando, FL, May 19-21, 1987

    SciTech Connect

    Buser, R.G.; Warren, F.B.

    1987-01-01

    The present conference discusses topics in the fields of IR sensor multifunctional design; image modeling, simulation, and detection; IR sensor configurations and components; thermal sensor arrays; silicide-based IR sensors; and IR focal plane array utilization. Attention is given to the fusion of lidar and FLIR for target segmentation and enhancement, the synergetic integration of thermal and visual images for computer vision, the 'Falcon Eye' FLIR system, multifunctional electrooptics and multiaperture sensors for precision-guided munitions, and AI approaches to data integration. Also discussed are the comparative performance of Ir silicide and Pt silicide photodiodes, high fill-factor silicide monolithic arrays, and the characterization of noise in staring IR focal plane arrays.

  15. Discrete Kalman Filter based Sensor Fusion for Robust Accessibility Interfaces

    NASA Astrophysics Data System (ADS)

    Ghersi, I.; Mariño, M.; Miralles, M. T.

    2016-04-01

    Human-machine interfaces have evolved, benefiting from the growing access to devices with superior, embedded signal-processing capabilities, as well as from new sensors that allow the estimation of movements and gestures, resulting in increasingly intuitive interfaces. In this context, sensor fusion for the estimation of the spatial orientation of body segments allows more robust solutions to be achieved, overcoming specific disadvantages derived from the use of isolated sensors, such as the sensitivity of magnetic-field sensors to external influences when used in uncontrolled environments. In this work, a method for the combination of image-processing data and angular-velocity registers from a 3D MEMS gyroscope, through a Discrete-time Kalman Filter, is proposed and deployed as an alternate user interface for mobile devices, in which an on-screen pointer is controlled with head movements. Results concerning the general performance of the method are presented, as well as a comparative analysis, under a dedicated test application, with results from a previous version of this system, in which the relative-orientation information was acquired directly from MEMS sensors (3D magnetometer-accelerometer). These results show an improved response for this new version of the pointer, both in terms of precision and response time, while keeping many of the benefits that were highlighted for its predecessor, giving rise to a complementary method for signal acquisition that can be used as an alternative input device, as well as for accessibility solutions.

  16. Optimal topologies for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Tillett, Jason C.; Yang, Shanchieh J.; Rao, Raghuveer M.; Sahin, Ferat

    2004-11-01

    Since untethered sensor nodes operate on battery, and because they must communicate through a multi-hop network, it is vital to optimally configure the transmit power of the nodes both to conserve power and optimize spatial reuse of a shared channel. Current topology control algorithms try to minimize radio power while ensuring connectivity of the network. We propose that another important metric for a sensor network topology will involve consideration of hidden nodes and asymmetric links. Minimizing the number of hidden nodes and asymmetric links at the expense of increasing the transmit power of a subset of the nodes may in fact increase the longevity of the sensor network. In this paper we explore a distributed evolutionary approach to optimizing this new metric. Inspiration from the Particle Swarm Optimization technique motivates a distributed version of the algorithm. We generate topologies with fewer hidden nodes and asymmetric links than a comparable algorithm and present some results that indicate that our topologies deliver more data and last longer.
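
The particle-swarm machinery that inspires this record's distributed topology optimizer reduces to a short update rule. This generic, centralized sketch minimizes an arbitrary scalar objective; the paper's contribution is the distributed, topology-specific adaptation, and the inertia/acceleration constants below are conventional defaults, not the paper's settings.

```python
import random

# Generic particle swarm optimization (PSO) sketch on a 1-D search space:
# each particle is pulled toward its personal best and the swarm's best.

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # personal best positions
    gbest = min(xs, key=f)             # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]                       # inertia
                     + 1.5 * r1 * (pbest[i] - xs[i])   # cognitive pull
                     + 1.5 * r2 * (gbest - xs[i]))     # social pull
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the paper's setting the "position" would encode per-node transmit power and the objective would count hidden nodes and asymmetric links rather than a simple quadratic.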

  17. Optimal integral controller with sensor failure accommodation

    NASA Technical Reports Server (NTRS)

    Alberts, T.; Houlihan, T.

    1989-01-01

    An Optimal Integral Controller that readily accommodates Sensor Failure - without resorting to (Kalman) filter or observer generation - has been designed. The system is based on Navy-sponsored research for the control of high performance aircraft. In conjunction with a NASA developed Numerical Optimization Code, the Integral Feedback Controller will provide optimal system response even in the case of incomplete state feedback. Hence, the need for costly replication of plant sensors is avoided since failure accommodation is effected by system software reconfiguration. The control design has been applied to a particularly ill-behaved, third-order system. Dominant-root design in the classical sense produced an almost 100 percent overshoot for the third-order system response. An application of the newly-developed Optimal Integral Controller - assuming all state information available - produces a response with no overshoot. A further application of the controller design - assuming a one-third sensor failure scenario - produced a slight overshoot response that still preserved the steady state time-point of the full-state feedback response. The control design should have wide application in space systems.
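    The value of integral action for tolerating incomplete feedback can be illustrated with a toy first-order plant under a proportional-plus-integral law (an illustrative stand-in, not the paper's optimally tuned design): the integrator drives steady-state tracking error to zero without reconstructing unmeasured states.

```python
# First-order plant y' = -y + u under proportional-plus-integral feedback.
dt, r = 0.01, 1.0          # Euler time step, step reference
y, z = 0.0, 0.0            # plant output, integrator state
kp, ki = 2.0, 1.0          # illustrative proportional and integral gains
for _ in range(3000):      # 30 s of simulated time
    e = r - y
    z += e * dt            # integral action removes steady-state error
    u = kp * e + ki * z
    y += (-y + u) * dt     # Euler step of the plant
```

With these gains the closed-loop eigenvalues are both in the left half-plane, so the output settles on the reference; the integral term supplies the steady-state effort that a failed sensor channel can no longer inform.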

  18. Convolutional neural network based sensor fusion for forward looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Rayn; Crosskey, Miles; Chen, David; Walenz, Brett; Morton, Kenneth

    2016-05-01

    Forward looking ground penetrating radar (FLGPR) is an alternative buried threat sensing technology designed to offer additional standoff compared to downward looking GPR systems. Due to additional flexibility in antenna configurations, FLGPR systems can accommodate multiple sensor modalities on the same platform that can provide complementary information. The different sensor modalities present challenges both in developing informative feature extraction methods and in fusing sensor information to obtain the best discrimination performance. This work uses convolutional neural networks to jointly learn features across two sensor modalities and fuse the information in order to distinguish between target and non-target regions. This joint optimization is possible by modifying the traditional image-based convolutional neural network configuration to extract data from multiple sources. The filters generated by this process create a learned feature extraction method that is optimized to provide the best discrimination performance when fused. This paper presents the results of applying convolutional neural networks and compares these results to fusion performed with a linear classifier. This paper also compares performance between convolutional neural network architectures to show the benefit of fusing the sensor information in different ways.
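    The multi-modality fusion idea can be sketched with a deliberately simplified stand-in for the paper's CNN: fixed convolutional filters extract features from each of two synthetic modalities, the features are concatenated, and a single logistic head is trained on the fused vector. The modality models, filter, and training settings are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feats(x, filt):
    """Valid-mode 1-D convolution plus simple pooling: a stand-in for
    one convolutional layer per sensor modality."""
    n = len(x) - len(filt) + 1
    resp = np.array([np.dot(x[i:i + len(filt)], filt) for i in range(n)])
    return np.array([np.abs(resp).max(), resp.mean()])

def sample(target):
    """Synthetic two-modality data: targets add a bump to both channels."""
    t = np.linspace(0, 1, 64)
    bump = np.exp(-((t - 0.5) ** 2) / 0.01) if target else 0.0
    mod_a = bump + rng.normal(0, 0.3, 64)        # e.g. one radar channel
    mod_b = 0.5 * bump + rng.normal(0, 0.3, 64)  # e.g. a second modality
    return mod_a, mod_b

filt = np.exp(-np.linspace(-1, 1, 9) ** 2)       # fixed matched-ish filter
X, y = [], []
for label in [0, 1] * 100:
    a, b = sample(label)
    X.append(np.concatenate([conv_feats(a, filt), conv_feats(b, filt)]))
    y.append(label)
X, y = np.array(X), np.array(y)

# jointly train one logistic head on the fused (concatenated) features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()
acc = ((p > 0.5) == y).mean()
```

A real implementation would also learn the filters by backpropagation; here they are fixed so that only the fused decision layer is trained.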

  19. Fusion of radar and optical sensors for space robotic vision

    NASA Technical Reports Server (NTRS)

    Shaw, Scott W.; Defigueiredo, Rui J. P.; Krishen, Kumar

    1988-01-01

    To determine the shape of a 3-D surface, returned radar power estimates are used in an iterative procedure that generates successive approximations to the target shape. A simulation is shown which involves the reconstruction of an edge of a flat plate. Although this is a somewhat artificial example, it addresses the real problem of recovering edges of space objects lost in shadow or against a dark background. The results indicate that a microwave/optical sensor fusion system is possible, given sufficient computing power and accurate radar cross section measuring systems.

  20. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the algorithms developed for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
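    The Allan variance analysis mentioned above can be sketched for a white-noise-only rate signal: for pure white noise the Allan variance falls as 1/tau, which the toy below reproduces. Cluster sizes and the noise level are illustrative assumptions.

```python
import numpy as np

def allan_variance(rate, dt, m):
    """Non-overlapping Allan variance of a rate signal at cluster size
    m samples (cluster time tau = m * dt)."""
    n = len(rate) // m
    means = rate[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
white = rng.normal(0, 0.1, 200_000)       # white-noise-only gyro rate
dt = 0.01
avar = {m: allan_variance(white, dt, m) for m in (10, 100, 1000)}
```

On a log-log Allan deviation plot this gives the characteristic -1/2 slope of angle random walk; bias instability and rate random walk would show up as flat and +1/2 regions at longer cluster times.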

  2. Sensor fusion for on-line monitoring of yoghurt fermentation.

    PubMed

    Cimander, Christian; Carlsson, Maria; Mandenius, Carl-Fredrik

    2002-11-13

    Measurement data from an electronic nose (EN), a near-infrared spectrometer (NIRS) and standard bioreactor probes were used to follow the course of lab-scale yoghurt fermentation. The sensor signals were fused using a cascade neural network: a primary network predicted quantitative process variables, including lactose, galactose and lactate; a secondary network predicted a qualitative process state variable describing critical process phases, such as the onset of coagulation or the harvest time. Although the accuracy of the neural network prediction was already acceptable and comparable with the off-line reference assay, its stability and performance were significantly improved by correction of faulty data. The results demonstrate that on-line sensor fusion with the chosen analyzers improves monitoring and quality control of yoghurt fermentation, with implications for other fermentation processes. PMID:12385712

  3. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions. This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods. The second problem this thesis addresses is that of distributed data fusion. 

  4. Information fusion based optimal control for large civil aircraft system.

    PubMed

    Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen

    2015-03-01

    Wind disturbance has a great influence on the landing security of large civil aircraft. Simulation research and engineering experience show that PID control is not sufficient to restrain wind disturbance. This paper focuses on anti-wind attitude control for large civil aircraft in the landing phase. In order to improve riding comfort and flight security, an information fusion based optimal control strategy is presented to restrain wind in the landing phase while maintaining attitudes and airspeed. Data from a Boeing 707 are used to establish a nonlinear model with the total variables of a large civil aircraft, from which two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant, and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing the hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for discrete-time linear systems with disturbance. The simulation results on the nonlinear aircraft model indicate that the information fusion optimal control outperforms traditional PID control, LQR control and LQR control with integral action in anti-wind disturbance performance in the landing phase. PMID:25440950
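    As a point of reference for the LQR baselines the paper compares against, a standard discrete-time LQR gain can be computed by backward Riccati recursion. The double-integrator model and the weights below are illustrative, not the aircraft model.

```python
import numpy as np

# double-integrator discretization: position/velocity states, force input
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt * dt], [dt]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[0.1]])        # control-effort weighting

# backward Riccati recursion over a finite horizon
P = Q.copy()
for _ in range(200):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# closed-loop simulation from an initial position offset
x = np.array([[1.0], [0.0]])
for _ in range(100):
    u = -K @ x
    x = A @ x + B @ u
```

The information fusion regulator in the paper instead folds the dynamics in as hard-constraint information and the performance index as soft-constraint information; the recursion above is only the conventional baseline.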

  5. Sensor fusion methodology for remote detection of buried land mines

    SciTech Connect

    Del Grande, N.

    1990-04-01

    We are investigating a sensor fusion methodology for remote detection of buried land mines. Our primary approach is sensor intrafusion. Our dual-channel passive IR methodology decouples true (corrected) surface temperature variations of 0.2 °C from spatially dependent surface emissivity noise. It produces surface temperature maps showing patterns of conducted heat from buried objects, which heat and cool differently from their surroundings. Our methodology exploits Planck's radiation law. It produces separate maps of surface emissivity variations which allow us to reduce false alarms. Our secondary approach is sensor interfusion using other methodologies. For example, an active IR CO2 laser reflectance channel helps distinguish surface targets unrelated to buried land mines at night, when photographic methods are ineffective. Also, the interfusion of ground penetrating radar provides depth information for confirming the site of buried objects. Together with EG&G in Las Vegas, we flew a mission at Nellis AFB using the Daedalus dual-channel (5 and 10 micron) IR scanner mounted on a helicopter platform at an elevation of 60 m above the desert sand. We detected surface temperature patterns associated with buried (inert) land mines covered by as much as 10 cm of dry sand. The respective spatial, spectral, thermal, emissivity and temporal signatures associated with buried targets differed from those associated with surface vegetation, rocks and manmade objects. Our results were consistent with predictions based on the annual Temperature Wave Model and were confirmed by field measurements. The dual-channel sensor fusion methodology is expected to enhance the capabilities of the military and industrial community for standoff mine detection. Other important potential applications are open skies, drug traffic control and environmental restoration at waste burial sites. 11 figs.
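    The dual-channel decoupling of temperature from emissivity via Planck's law can be sketched for an idealized graybody: the 5 um / 10 um radiance ratio depends only on temperature, so it can be inverted by bisection and the emissivity recovered from either band. Atmospheric and band-integration effects are ignored in this assumption-laden toy.

```python
from math import exp

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(lam, T):
    """Spectral radiance B(lambda, T) in W sr^-1 m^-3."""
    return (2 * H * C**2 / lam**5) / (exp(H * C / (lam * K * T)) - 1)

def split_temperature_emissivity(L5, L10, lam5=5e-6, lam10=10e-6):
    """Graybody model L_i = eps * B(lam_i, T): the band ratio
    L5/L10 = B(lam5,T)/B(lam10,T) is monotonic in T, so bisection
    recovers T; emissivity then follows from either band."""
    target = L5 / L10
    lo, hi = 200.0, 400.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if planck(lam5, mid) / planck(lam10, mid) < target:
            lo = mid
        else:
            hi = mid
    T = 0.5 * (lo + hi)
    return T, L10 / planck(lam10, T)

# synthetic check: graybody at 300 K with emissivity 0.95
T_true, eps_true = 300.0, 0.95
L5 = eps_true * planck(5e-6, T_true)
L10 = eps_true * planck(10e-6, T_true)
T_est, eps_est = split_temperature_emissivity(L5, L10)
```

This is the essence of "decoupling temperature from emissivity noise": variations that move both bands by a common factor land in the emissivity map, not the temperature map.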

  6. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method that inserts a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets; the proposed modBMVT can remove thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. 
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  8. Context-Aided Sensor Fusion for Enhanced Urban Navigation

    PubMed Central

    Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María

    2012-01-01

    The deployment of Intelligent Vehicles in urban environments requires reliable positioning estimates for urban navigation. The inherent complexity of such environments fosters the development of novel systems that provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt the filter to the situation, while at the same time carrying out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of the available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution supports the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080

  9. Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako

    A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert an epidemic outbreak. A nonlinear loss function is utilized in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.

  10. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the issue that available fusion methods cannot self-adaptively adjust their fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merits of genetic algorithms with the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
    •The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    •This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
    •This text proposes the model operator and the observation operator as the fusion scheme of RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA. PMID:27408827

  11. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    NASA Astrophysics Data System (ADS)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method that applies a machine learning approach to the optimal weighted fusion of multisensor data, helping to extract the maximum information about any land cover. A considerable amount of research has been carried out on multisensor data fusion, but obtaining an optimal fusion for the enhancement of land cover information using random weights remains ambiguous. There is therefore a need for a land cover monitoring system that can provide the maximum information about the land cover, which is generally not possible with single sensor data, and a need for techniques by which the information in multisensor data can be utilized optimally. Machine learning is one of the best ways to optimize this type of information. In this paper, the weights of each sensor's data required for the fusion have been critically analyzed, and it is observed that the fusion is quite sensitive to these weights. Therefore, different combinations of weights have been tested exhaustively in order to develop a relationship between the weights and the classification accuracy of the fused data. This relationship can be optimized through machine learning techniques like SVM (Support Vector Machine). In the present study, this experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR is fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), and NDVI (Normalized Difference Vegetation Index) is a good indicator of vegetation, utilizing different bands (Red and NIR) of freely available MODIS data at 250 m resolution. First, the resolution of NDVI has been enhanced from 250 m to 25 m (10 times) using a modified DWT (Modified Discrete Wavelet Transform) to bring it to the same scale as PALSAR. Then, the differently polarized PALSAR data (HH, HV, VV) have been fused with resolution enhanced NDVI
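    The weight-sensitivity experiment can be mimicked on synthetic data: sweep the fusion weight between two noisy "sensor" features and track the accuracy of a simple threshold classifier (a stand-in for the paper's SVM; all data below are simulated, not PALSAR/MODIS).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)
# per-pixel features from two hypothetical sensors, each a noisy view
# of the same two land-cover classes (different noise levels)
sensor_a = labels + rng.normal(0, 0.8, n)
sensor_b = labels + rng.normal(0, 0.5, n)

def accuracy(feature, labels):
    """Threshold classifier at the midpoint of the two class means."""
    thr = 0.5 * (feature[labels == 0].mean() + feature[labels == 1].mean())
    return ((feature > thr) == labels).mean()

weights = np.linspace(0, 1, 21)
accs = [accuracy(w * sensor_a + (1 - w) * sensor_b, labels) for w in weights]
best_w = weights[int(np.argmax(accs))]
```

Because the sweep includes w=0 and w=1, the best fused accuracy can never fall below either single-sensor accuracy, which is the basic argument for learning the weights rather than choosing them at random.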

  12. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  13. Improvement of Kinect™ Sensor Capabilities by Fusion with Laser Sensing Data Using Octree

    PubMed Central

    Chávez, Alfredo; Karstoft, Henrik

    2012-01-01

    To enhance sensor capabilities, sensor data readings from different modalities must be fused. The main contribution of this paper is to present a sensor data fusion approach that can reduce Kinect™ sensor limitations. This approach involves combining laser with Kinect™ sensors. Sensor data is modelled in a 3D environment based on octrees using a probabilistic occupancy estimation. The Bayesian method, which takes into account the uncertainty inherent in the sensor measurements, is used to fuse the sensor information and update the 3D octree map. The sensor fusion yields a significant increase of the field of view of the Kinect™ sensor that can be used for robot tasks. PMID:22666006
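    The probabilistic occupancy bookkeeping used in octree maps is commonly done in log-odds form, where independent sensor readings simply add; a minimal per-voxel sketch follows (the inverse-sensor-model probabilities are made-up values, and a real octree would store one such log-odds value per node).

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def fuse_cell(readings, p_prior=0.5):
    """Bayesian occupancy fusion for one voxel under the usual
    independent-measurement assumption: sum the sensor log-odds."""
    l = logit(p_prior)
    for p in readings:            # each p: P(occupied | this reading)
        l += logit(p) - logit(p_prior)
    return 1 / (1 + math.exp(-l))

# a Kinect hit (0.7) corroborated by a laser hit (0.8)
p_fused = fuse_cell([0.7, 0.8])
```

Two agreeing sensors yield a fused occupancy probability higher than either alone, which is exactly how the laser data extends confidence beyond the Kinect's own field of view.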

  14. Imaging sensor fusion and enhanced vision for helicopter landing operations

    NASA Astrophysics Data System (ADS)

    Hebel, Marcus; Bers, Karlheinz; Jäger, Klaus

    2006-05-01

    An automatic target recognition system has been assembled and tested at the Research Institute for Optronics and Pattern Recognition in Germany over recent years. Its multisensor design comprises off-the-shelf components: an FPA infrared camera, a scanning laser radar and an inertial measurement unit. In the paper we describe several possibilities for the use of this multisensor equipment during helicopter missions. We discuss suitable data processing methods, for instance the automatic time synchronization of different imaging sensors, pixel-based data fusion and the incorporation of collateral information. The results are visualized in an appropriate way for presentation on a cockpit display. We also show how our system can act as a landing aid for pilots in brownout conditions (dust clouds caused by the landing helicopter).

  15. Optimization of Microelectronic Devices for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    2000-01-01

    The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments and communication systems among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space (searching for optimized performance by repeated fabrication efforts) is infeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on the fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design, will be presented using genetic algorithm global optimization methods.

  16. Knowledge assistant: A sensor fusion framework for robotic environmental characterization

    SciTech Connect

    Feddema, J.T.; Rivera, J.J.; Tucker, S.D.

    1996-12-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of the sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g. estimated dimensions, weight, material composition, etc.) are displayed in the world model. This paper highlights the major components of this system.

  17. Robust sensor fusion of unobtrusively measured heart rate.

    PubMed

    Wartzek, Tobias; Brüser, Christoph; Walter, Marian; Leonhardt, Steffen

    2014-03-01

    Contactless vital sign measurement technologies often have the drawback of severe motion artifacts and periods in which no signal is available. However, using several identical or physically different sensors, redundancy can be used to decrease the error in noncontact heart rate estimation, while increasing the time period during which reliable data are available. In this paper, we show for the first time two major results for contactless heart rate measurements deduced from a capacitive ECG and optical pulse signals. First, artifact detection is an essential preprocessing step for reliable fusion. Second, the robust but computationally efficient median already provides good results; however, a Bayesian approach with a short-time estimation of the variance achieves the best results in terms of deviation from the reference heart rate and temporal coverage. In this paper, six sensor signals were used and coverage increased from 0-90% to 80-94%, while the difference between the estimated heart rate and the gold standard was less than ±2 BPM. PMID:24608065
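    The comparison of median fusion against a variance-aware Bayesian-style combination can be mimicked with synthetic heart-rate readings from six sensors of unequal quality; the noise levels and trial count are illustrative assumptions, not the paper's data.

```python
import numpy as np

def fuse_heart_rate(estimates, variances):
    """Inverse-variance (Bayesian-style) fusion of per-sensor heart-rate
    estimates: each reading is weighted by 1/variance."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * estimates) / np.sum(w))

rng = np.random.default_rng(0)
hr_true = 72.0
sigmas = np.array([1.0, 1.0, 5.0, 5.0, 10.0, 10.0])   # six unequal sensors
trials = np.array([
    [fuse_heart_rate(hr_true + rng.normal(0, sigmas), sigmas ** 2),
     float(np.median(hr_true + rng.normal(0, sigmas)))]
    for _ in range(2000)
])
rmse_bayes, rmse_median = np.sqrt(((trials - hr_true) ** 2).mean(axis=0))
```

When per-sensor variances can be estimated on a short time window, weighting by them beats the plain median, which treats a clean capacitive-ECG channel and a noisy optical channel alike.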

  18. An approach to optimal hyperspectral and multispectral signature and image fusion for detecting hidden targets on shorelines

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.

    2015-10-01

    Hyperspectral and multispectral imagery of shorelines collected from airborne and shipborne platforms are used following pushbroom imagery corrections based on inertial motion units, augmented global positioning data and Kalman filtering. Corrected radiance or reflectance images are then used to optimize synthetic high spatial resolution spectral signatures resulting from an optimized data fusion process. The process demonstrated utilizes littoral zone features from imagery acquired in the Gulf of Mexico region. Shoreline imagery along the Banana River, Florida, is presented that utilizes a technique that makes use of numerically embedded targets in both higher spatial resolution multispectral images and lower spatial resolution hyperspectral imagery. The fusion process developed utilizes optimization procedures that include random selection of regions and pixels in the imagery, and minimizing the difference between the synthetic signatures and observed signatures. The optimized data fusion approach allows detection of spectral anomalies in the resolution enhanced data cubes. Spectral-spatial anomaly detection is demonstrated using numerically embedded line targets within actual imagery. The approach allows one to test spectral signature anomaly detection and to identify features and targets. The optimized data fusion techniques and software allow one to perform sensitivity analysis and optimization in the singular value decomposition model building process and the 2-D Butterworth cutoff frequency and order numerical selection process. The data fusion "synthetic imagery" forms a basis for spectral-spatial resolution enhancement for optimal band selection and remote sensing algorithm development within "spectral anomaly areas". Sensitivity analysis demonstrates the data fusion methodology is most sensitive to (a) the pixels and features used in the SVD model building process and (b) the 2-D Butterworth cutoff frequency optimized by application of K
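    The 2-D Butterworth transfer function whose cutoff frequency and order the abstract describes selecting has a standard closed form; a minimal construction follows (the grid size, cutoff and order are arbitrary example values).

```python
import numpy as np

def butterworth_2d(shape, cutoff, order):
    """Centered 2-D Butterworth low-pass transfer function
    H(u, v) = 1 / (1 + (D / D0)^(2n)), with D the distance from DC."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (D / cutoff) ** (2 * order))

H = butterworth_2d((128, 128), cutoff=20.0, order=2)
# applied in the frequency domain, e.g.
# ifft2(ifftshift(H * fftshift(fft2(img))))
```

H equals 1 at DC and 0.5 exactly at the cutoff radius; raising the order sharpens the roll-off, which is the trade-off being tuned in the sensitivity analysis above.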

  19. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.

    1992-11-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  20. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  1. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    A method for improving the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor measures the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require solving the complex kinematics parameter equations, adding motion constraints, or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
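    The optimal-information-fusion idea behind KF/MOIFA-style estimators can be illustrated with classical inverse-covariance weighting of two independent estimates (a generic sketch, not the paper's MOIFA implementation; function and variable names are ours):

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Inverse-covariance (information-weighted) fusion of two independent
    estimates of the same state; the fused covariance is never larger."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(I1 + I2)              # fused covariance
    xf = Pf @ (I1 @ x1 + I2 @ x2)            # information-weighted mean
    return xf, Pf

# Two position estimates with equal uncertainty fuse to their midpoint,
# and the fused variance halves.
xf, Pf = fuse_estimates(np.array([1.0]), np.array([[2.0]]),
                        np.array([2.0]), np.array([[2.0]]))
```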

  2. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosability, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
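    The multidimensional-knapsack flavor of the allocation problem can be illustrated with a toy 0/1 knapsack dynamic program over a single cost dimension (the paper's solver uses Lagrangian relaxation and approximate belief revision; the sensor names and scores below are hypothetical):

```python
def select_sensors(sensors, budget):
    """0/1 knapsack dynamic program: maximise total diagnosability score
    subject to an integer cost budget. sensors: (name, score, cost) tuples."""
    best = {0: (0.0, frozenset())}          # cost -> (score, chosen sensors)
    for name, score, cost in sensors:
        for c, (s, chosen) in list(best.items()):
            nc = c + cost
            if nc <= budget and (nc not in best or best[nc][0] < s + score):
                best[nc] = (s + score, chosen | {name})
    return max(best.values(), key=lambda t: t[0])

# Hypothetical sensors: (name, diagnosability score, cost).
score, chosen = select_sensors([("T", 4, 3), ("P", 3, 2), ("V", 5, 4)], budget=5)
```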

  3. Application of Multi-Sensor Information Fusion Method Based on Rough Sets and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Xue, Jinxue; Wang, Guohu; Wang, Xiaoqiang; Cui, Fengkui

    In order to improve the precision and data processing speed of multi-sensor information fusion, a multi-sensor data fusion algorithm is studied in this paper. First, rough set theory (RS) is applied for attribute reduction of the parameter set, exploiting its strength in handling large amounts of data to eliminate redundant information. Then, the reduced data are trained and classified by a Support Vector Machine (SVM). Experimental results showed that this method can improve the speed and accuracy of a multi-sensor fusion system.
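    The attribute-reduction step can be sketched as a greedy rough-set-style procedure that drops any attribute whose removal keeps the decision table consistent (a toy version under our own table encoding, not the paper's reduction algorithm):

```python
def reduct(table, labels):
    """Greedy rough-set style attribute reduction: drop any attribute whose
    removal keeps the decision table consistent (equal rows -> equal labels)."""
    def consistent(cols):
        seen = {}
        for row, y in zip(table, labels):
            key = tuple(row[c] for c in cols)
            if seen.setdefault(key, y) != y:
                return False
        return True

    keep = list(range(len(table[0])))
    for c in list(keep):
        trial = [k for k in keep if k != c]
        if trial and consistent(trial):
            keep = trial
    return keep

# Attribute 2 alone determines the decision in this toy table.
core = reduct([[1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]], [1, 1, 0, 0])
```

    The retained attributes would then be fed to the SVM classifier.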

  4. A Perspective on information fusion problems, Extended Abstract, Journal of Distributed Sensor Networks

    SciTech Connect

    Rao, Nageswara S

    2009-04-01

    Information fusion problems have a rich history spanning four centuries and several disciplines as diverse as political economy, reliability engineering, target tracking, bioinformatics, forecasting, distributed detection, robotics, cyber security, nuclear engineering, distributed sensor networks, and others. Over the past decade, the area of information fusion has been established as a discipline in itself, with several contributions to its foundations as well as applications. In a basic formulation of the information fusion problem, each component is characterized by a probability distribution. The goal is to estimate a fusion rule for combining the outputs of components to achieve a specified objective, such as better performance or functionality compared to the components. If the sensor error distributions are known, several fusion rule estimation problems have been formulated and solved using deterministic methods. In the area of pattern recognition, a weighted majority fuser was shown to be optimal in combining outputs from pattern recognizers under statistical independence conditions. A simpler version of this problem corresponds to the Condorcet Jury theorem proposed in 1786. This result has since been rediscovered in other disciplines, including by von Neumann in 1959 in building reliable computing devices. The distributed detection problem, studied extensively in the target tracking area, can be viewed as a generalization of the above two problems. In these works, the underlying distributions are assumed to be known, which is quite reasonable in the areas where these methods are applied. In a different formulation, we consider estimating the fuser based on empirical data when no information is available about the underlying distributions of components. Using empirical estimation methods, this problem is shown to be solvable in principle, and the fuser performance may be sharpened based on the specific formulation. The isolation fusers perform at least as good
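    The weighted majority fuser referred to above, for independent binary classifiers, weights each vote by the log-odds of the classifier's accuracy; a minimal sketch (function name and accuracy values are ours, for illustration):

```python
import math

def weighted_majority(decisions, accuracies):
    """Weighted-majority fusion of independent binary classifiers: each vote
    is weighted by the log-odds log(p / (1 - p)) of its accuracy p."""
    s = sum(math.log(p / (1 - p)) * (1 if d else -1)
            for d, p in zip(decisions, accuracies))
    return s > 0

# One highly accurate classifier outvotes two weak dissenters.
fused = weighted_majority([True, False, False], [0.9, 0.6, 0.6])
```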

  5. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810

  6. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
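    The complementary-filtering idea contrasted here with pure gyroscope integration can be sketched in a single axis (illustrative only; the study's non-linear observer is more elaborate, and every parameter value below is hypothetical):

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """One-axis complementary filter: high-pass the integrated gyro rate and
    low-pass the accelerometer tilt angle, blending with weight alpha."""
    angle = accel_angles[0]                  # initialise from the aiding sensor
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out

# With a silent gyro, the estimate tracks the accelerometer tilt (10 deg)
# instead of drifting as pure integration would.
angles = complementary_filter([0.0] * 50, [10.0] * 50, dt=0.01)
```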

  7. Sensor fusion for the localisation of birds in flight

    NASA Astrophysics Data System (ADS)

    Millikin, Rhonda Lorraine

    Tracking and identification of birds in flight remains a goal of aviation safety worldwide and conservation in North America. Marine surveillance radar, tracking radar and more recently weather radar have been used to monitor mass movements of birds. The emphasis has been on prediction of migration fronts where thousands of birds follow weather patterns across a large geographic area. Microphones have been stationed over wide areas to receive calls of these birds and help catalogue the diversity of species comprising these migrations. A most critical feature of landbird migration is where the birds land to rest and feed. These habitats are not known and therefore cannot effectively be protected. For effective management of landbird migrants (nocturnal migrant birds), short-range flight behaviour (100--300 m above ground) is the critical air space to monitor. To ensure conservation efforts are focused on endangered species and species truly at risk, species of individual birds must be identified. Short-range monitoring of individual birds is also important for aviation safety. Up to 75% of bird-aircraft collisions occur within 500 ft (153 m) above the runway. Identification of each bird will help predict its flight path, a critical factor in the prevention of a collision. This thesis focuses on short-range identification of individual birds to localise birds in flight. This goal is achieved through fusing data from two sensor systems, radar and acoustic. This fusion provides more accurate tracking of birds in the lower airspace and allows for the identification of species of interest. In the fall of 1999, an experiment was conducted at Prince Edward Point, a southern projection of land on the north shore of Lake Ontario, to prove that the fusion of radar and acoustic sensors enhances the detection, location and tracking of nocturnal migrant birds. As these birds migrate at night, they are difficult to track visually. However, they are detectable with X

  8. Belief Function Based Decision Fusion for Decentralized Target Classification in Wireless Sensor Networks

    PubMed Central

    Zhang, Wenyu; Zhang, Zhenjiang

    2015-01-01

    Decision fusion in sensor networks enables sensors to improve classification accuracy while reducing the energy consumption and bandwidth demand for data transmission. In this paper, we focus on the decentralized multi-class classification fusion problem in wireless sensor networks (WSNs) and a new simple but effective decision fusion rule based on belief function theory is proposed. Unlike existing belief function based decision fusion schemes, the proposed approach is compatible with any type of classifier because the basic belief assignments (BBAs) of each sensor are constructed on the basis of the classifier’s training output confusion matrix and real-time observations. We also derive explicit global BBA in the fusion center under Dempster’s combinational rule, making the decision making operation in the fusion center greatly simplified. Also, sending the whole BBA structure to the fusion center is avoided. Experimental results demonstrate that the proposed fusion rule has better performance in fusion accuracy compared with the naïve Bayes rule and weighted majority voting rule. PMID:26295399
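    Dempster's rule of combination used in the fusion center can be sketched directly from its definition (a minimal implementation; the BBA values below are illustrative, not from the paper):

```python
def dempster(m1, m2):
    """Dempster's rule of combination for two basic belief assignments,
    each a dict mapping frozenset hypotheses to mass."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:                            # non-empty intersection supports C
                combined[C] = combined.get(C, 0.0) + a * b
            else:                            # empty intersection is conflict
                conflict += a * b
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

m1 = {frozenset({"car"}): 0.6, frozenset({"car", "truck"}): 0.4}
m2 = {frozenset({"car"}): 0.5, frozenset({"car", "truck"}): 0.5}
fused = dempster(m1, m2)
```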

  9. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    PubMed Central

    Ye, Lin; Yang, Ming; Xu, Liang; Zhuang, Xiaoqi; Dong, Zhaopeng; Li, Shiyang

    2014-01-01

    Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structure complexity of the sensor, understanding the influences of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on the parameters' effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined for the sensor design to get the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor is manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°. PMID:24590353
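    A minimal 1-D particle swarm optimizer of the kind used for the parameter search might look like this (a generic textbook sketch, not the authors' FEM-coupled implementation; all constants are typical defaults rather than the paper's values):

```python
import random

def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimisation: particles are pulled toward
    their personal best and the global best found so far."""
    lo, hi = bounds
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest, pval = list(xs), [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i] + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to bounds
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

random.seed(1)
best_x, best_err = pso(lambda x: (x - 2.0) ** 2, (-5.0, 5.0))
```

    In the paper the objective would instead be the FEM-computed nonlinearity error as a function of the sensor's structure parameters.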

  10. Adaptive Multi-sensor Data Fusion Model for In-situ Exploration of Mars

    NASA Astrophysics Data System (ADS)

    Schneiderman, T.; Sobron, P.

    2014-12-01

    Laser Raman spectroscopy (LRS) and laser-induced breakdown spectroscopy (LIBS) can be used synergistically to characterize the geochemistry and mineralogy of potential microbial habitats and biosignatures. The value of LRS and LIBS has been recognized by the planetary science community: (i) NASA's Mars2020 mission features a combined LRS-LIBS instrument, SuperCam, and an LRS instrument, SHERLOC; (ii) an LRS instrument, RLS, will fly on ESA's 2018 ExoMars mission. The advantages of combining LRS and LIBS are evident: (1) LRS/LIBS can share hardware components; (2) LIBS reveals the relative concentration of major (and often trace) elements present in a sample; and (3) LRS yields information on the individual mineral species and their chemical/structural nature. Combining data from LRS and LIBS enables definitive mineral phase identification with precise chemical characterization of major, minor, and trace mineral species. New approaches to data processing are needed to analyze large amounts of LRS+LIBS data efficiently and maximize the scientific return of integrated measurements. Multi-sensor data fusion (MSDF) is a method that allows for robust sample identification through automated acquisition, processing, and combination of data. It optimizes information usage, yielding a more robust characterization of a target than could be acquired through single sensor use. We have developed a prototype fuzzy logic adaptive MSDF model aimed towards the unsupervised characterization of Martian habitats and their biosignatures using LRS and LIBS datasets. Our model also incorporates fusion of microimaging (MI) data - critical for placing analyses in geological and spatial context. Here, we discuss the performance of our novel MSDF model and demonstrate that automated quantification of the salt abundance in sulfate/clay/phyllosilicate mixtures is possible through data fusion of collocated LRS, LIBS, and MI data.

  11. Multi-sensor multi-resolution image fusion for improved vegetation and urban area classification

    NASA Astrophysics Data System (ADS)

    Kumar, U.; Milesi, C.; Nemani, R. R.; Basu, S.

    2015-06-01

    In this paper, we perform multi-sensor multi-resolution data fusion of Landsat-5 TM bands (at 30 m spatial resolution) and multispectral bands of WorldView-2 (WV-2, at 2 m spatial resolution) through a linear spectral unmixing model. The advantages of fusing Landsat and WV-2 data are twofold: first, the spatial resolution of the Landsat bands increases to the WV-2 resolution; second, integration of data from the two sensors adds the two SWIR bands from the Landsat data to the fused product, which offer advantages such as improved atmospheric transparency and material identification, for example, of urban features, construction materials, and moisture contents of soil and vegetation. In 150 separate experiments, the WV-2 data were clustered into 5, 10, 15, 20 and 25 spectral classes and data fusion was performed with 3x3, 5x5, 7x7, 9x9 and 11x11 kernel sizes for each Landsat band. The optimal fused bands were selected based on the Pearson product-moment correlation coefficient, RMSE (root mean square error) and the ERGAS index, and were subsequently used for vegetation, urban area and dark object (deep water, shadows) classification using a Random Forest classifier for a test site near the Golden Gate Bridge, San Francisco, California, USA. Accuracy assessment of the classified images through error matrices before and after fusion showed that the overall accuracy and Kappa for the fused data classification (93.74%, 0.91) were much higher than for the Landsat data classification (72.71%, 0.70) and the WV-2 data classification (74.99%, 0.71). This approach increased the spatial resolution of the Landsat data to the WV-2 spatial resolution while retaining the original Landsat spectral bands, with significant improvement in classification.
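    One of the band-selection criteria, the ERGAS index, has a standard closed form that can be computed as follows (a generic formulation; the band-first array layout is our assumption):

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS fusion-quality index: 100 * ratio * sqrt(mean_b (RMSE_b / mean_b)^2),
    with ratio the high-to-low resolution pixel-size ratio; lower is better."""
    fused = np.asarray(fused, float)
    reference = np.asarray(reference, float)
    terms = [(np.sqrt(np.mean((f - r) ** 2)) / r.mean()) ** 2
             for f, r in zip(fused, reference)]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

bands = np.ones((3, 4, 4))                   # toy band-first data cube
perfect = ergas(bands, bands, 2.0 / 30.0)    # identical data scores 0
```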

  12. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions. This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods. The second problem this thesis addresses is that of distributed data fusion. Distributed data fusion

  13. Study of data fusion algorithms applied to unattended ground sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, B.; Moras, J.; Dezert, Jean; Sella, G.

    2014-06-01

    In this paper, data obtained from a wireless unattended ground sensor network are used for tracking multiple ground targets (vehicles, pedestrians and animals) moving on and off the road network. The goal of the study is to evaluate several data fusion algorithms to select the best approach to establishing tactical situational awareness. The ground sensor network is composed of heterogeneous sensors (optronic, radar, seismic, acoustic and magnetic sensors) and data fusion nodes. The fusion nodes are small hardware platforms placed on the surveillance area that communicate with each other. In order to satisfy operational needs and the limited communication bandwidth between the nodes, we study several data fusion algorithms to track and classify targets in real time. A multiple target tracking (MTT) algorithm is integrated in each data fusion node, taking into account embedded constraints. The choice of the MTT algorithm is motivated by the limits of the chosen technology. In the fusion nodes, the distributed MTT algorithm exploits the road network information in order to constrain the multiple dynamic models. Then, a variable structure interacting multiple model (VS-IMM) is adapted to the road network topology. This algorithm is well known in centralized architectures, but it implies a modification of other data fusion algorithms to preserve the tracking performance under constraints. Based on this VS-IMM MTT algorithm, we adapt classical data fusion techniques to work in three architectures: centralized, distributed and hierarchical. The sensor measurements are considered asynchronous, but the fusion steps are synchronized across all sensors. The performance of the data fusion algorithms is evaluated using simulated data and also validated on real data. The scenarios under analysis contain multiple targets with close and crossing trajectories involving data association uncertainties.
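    The mode-probability update at the heart of an IMM/VS-IMM cycle can be sketched as follows (a generic IMM step; the constrained VS-IMM additionally varies the model set with the road-network topology, and the numbers below are illustrative):

```python
import numpy as np

def imm_mode_update(mu, Pi, likelihoods):
    """One IMM mode-probability step: mix with the Markov transition matrix
    Pi (Pi[i, j] = P(model j | model i)), then reweight by the per-model
    measurement likelihoods and renormalise."""
    mu = np.asarray(mu, float)
    predicted = np.asarray(Pi, float).T @ mu   # c_j = sum_i Pi[i, j] * mu_i
    post = predicted * np.asarray(likelihoods, float)
    return post / post.sum()

# Two motion models; the measurement favours model 0 by a 2:1 likelihood ratio.
mu_new = imm_mode_update([0.5, 0.5],
                         [[0.9, 0.1], [0.1, 0.9]],
                         [2.0, 1.0])
```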

  14. A Radiosonde Using a Humidity Sensor Array with a Platinum Resistance Heater and Multi-Sensor Data Fusion

    PubMed Central

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high-altitude environment, a capacitive humidity sensor, including four humidity sensors to collect meteorological humidity and a platinum resistance heater, was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with the platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263

  15. A radiosonde using a humidity sensor array with a platinum resistance heater and multi-sensor data fusion.

    PubMed

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high-altitude environment, a capacitive humidity sensor, including four humidity sensors to collect meteorological humidity and a platinum resistance heater, was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with the platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263

  16. Dynamic reweighting of three modalities for sensor fusion.

    PubMed

    Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J

    2014-01-01

    We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion. PMID:24498252

  17. Robust interacting multiple model algorithms based on multi-sensor fusion criteria

    NASA Astrophysics Data System (ADS)

    Zhou, Weidong; Liu, Mengmeng

    2016-01-01

    This paper is concerned with the state estimation problem for a class of Markov jump linear discrete-time stochastic systems. Three novel interacting multiple model (IMM) algorithms are proposed based on the H∞ technique, the correlation among estimation errors of mode-conditioned filters and the multi-sensor optimal information fusion criteria. Mode probabilities in the novel algorithms are derived based on the error cross-covariances instead of likelihood functions. The H∞ technique taking the place of Kalman filtering is applied to enhance the robustness of the new approaches. Theoretical analysis and Monte Carlo simulation results indicate that the proposed algorithms are effective and have an obvious advantage in velocity estimation when tracking a maneuvering target.

  18. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classification of a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performances of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212
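    Finding the minimum number of sensors for a target classification rate can be sketched as an exhaustive subset search, feasible for 12 sensors (the scoring callable stands in for the LDA classification rate and is entirely hypothetical):

```python
from itertools import combinations

def minimal_sensor_set(sensors, score, threshold):
    """Smallest sensor subset whose classification score reaches the
    threshold, searching subsets in order of increasing size (exhaustive,
    practical only for small sensor counts such as 12)."""
    for k in range(1, len(sensors) + 1):
        best = max(combinations(sensors, k), key=score)
        if score(best) >= threshold:
            return best
    return tuple(sensors)                    # fall back to the full set

# Hypothetical score: classification rate grows with subset size.
subset = minimal_sensor_set(["s1", "s2", "s3", "s4"],
                            lambda sub: len(sub) / 4.0, 0.5)
```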

  19. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classification of a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performances of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212

  20. A New Framework for Robust Retrieval and Fusion of Active/Passive Multi-Sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula-Georgiou, E.; Bras, R. L.

    2014-12-01

    This study introduces a new inversion approach for simultaneous retrieval and optimal fusion of multi-sensor passive/active precipitation spaceborne observations relevant to the Global Precipitation Measurement (GPM) constellation of satellites. This approach uses a modern Maximum a Posteriori (MAP) Bayesian estimator and variational principles to obtain a robust estimate of the rainfall profile from multiple sources of observationally- and physically-based a priori generated databases. The MAP estimator makes use of a constrained mixed ℓ1- and ℓ2-norm regularization that guarantees improved stability and reduced estimation error compared to the classic least-squares estimators often used in Bayesian rainfall retrieval techniques. We demonstrate the promise of our framework via a detailed algorithmic implementation using the passive and active multi-sensor observations provided by the microwave imager (TMI) and precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite. To this end, we simultaneously obtain an observationally-driven retrieval of the entire precipitation profile using the coincidental TMI-PR observations and then optimally combine it with a first guess derived from a physically-consistent a priori collected database of the TMI-2A12 operational product. We elucidate the performance of our algorithm for a wide range of storm environments, with a specific focus on extreme and light precipitation events over land and coastal areas for hydrologic applications. The results are also validated against ground-based observations and the standard TRMM products at seasonal and annual timescales.

  1. Selection of intrusion detection system threshold bounds for effective sensor fusion

    NASA Astrophysics Data System (ADS)

    Thomas, Ciza; Balakrishnan, Narayanaswamy

    2007-04-01

    The motivation behind the fusion of Intrusion Detection Systems was the realization that with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a design technique for sensor fusion that best utilizes the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality principle performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. This paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways, such as (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate that there is an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
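
    Combination strategy (i), taking all the alarms from each system while avoiding duplications, can be sketched as a simple set union. The alarm identifiers and sensor outputs below are hypothetical illustrations, not from the paper:

```python
def fuse_alarms(*alarm_lists):
    """Union of alarm reports from several IDS sensors, duplicates removed."""
    fused = set()
    for alarms in alarm_lists:
        fused.update(alarms)
    return sorted(fused)

# Hypothetical alarm reports from two IDS sensors; "synflood:10.0.0.9"
# is reported by both and must appear only once in the fused output.
ids_a = ["portscan:10.0.0.5", "synflood:10.0.0.9"]
ids_b = ["synflood:10.0.0.9", "bruteforce:10.0.0.7"]
print(fuse_alarms(ids_a, ids_b))
```

    Strategies (ii) and (iii) would additionally filter each sensor's alarms by its threshold bounds or by rules encoding its known performance before taking the union.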

  2. Statistical sensor fusion of ECG data using automotive-grade sensors

    NASA Astrophysics Data System (ADS)

    Koenig, A.; Rehg, T.; Rasshofer, R.

    2015-11-01

    Driver states such as fatigue, stress, aggression, distraction or even medical emergencies continue to lead to severe driving mistakes and promote accidents. A pathway towards improving driver state assessment can be found in psycho-physiological measures that directly quantify the driver's state from physiological recordings. Although heart rate is a well-established physiological variable that reflects cognitive stress, obtaining heart rate contactlessly and reliably is a challenging task in an automotive environment. Our aim was to investigate how sensor fusion of two automotive-grade sensors would influence the accuracy of automatic classification of cognitive stress levels. We induced cognitive stress in subjects and estimated stress levels from their heart rate signals, acquired from automotive-ready ECG sensors. Using signal quality indices and Kalman filters, we were able to decrease the root mean squared error (RMSE) of heart rate recordings by 10 beats per minute. We then trained a neural network to classify the cognitive workload state of subjects from heart rate and compared classification performance for ground truth, the individual sensors and the fused heart rate signal. We obtained 5% higher correct classification by fusing the signals compared to individual sensors, staying only 4% below the maximum classification accuracy achievable from ground truth. These results are a first step towards real-world applications of psycho-physiological measurements in vehicle settings. Future implementations of driver state modeling will be able to draw from a larger pool of data sources, such as additional physiological values or vehicle-related data, which can be expected to drive classification accuracy to significantly higher values.
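
    A minimal sketch of quality-weighted fusion of two heart-rate readings. This is not the paper's exact Kalman filter; it is the static inverse-variance weighting that a single Kalman measurement-update step reduces to, and the readings and variances below are hypothetical:

```python
def fuse_heart_rate(hr1, var1, hr2, var2):
    """Inverse-variance weighted fusion of two noisy heart-rate readings.
    A lower variance (better signal quality) earns a higher weight, so the
    fused estimate leans toward the cleaner sensor."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * hr1 + w2 * hr2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always below the smaller input variance
    return fused, fused_var

# Hypothetical readings: a clean sensor (variance 4) and a noisy one (16).
hr, var = fuse_heart_rate(72.0, 4.0, 80.0, 16.0)
print(round(hr, 1), round(var, 1))  # fused estimate sits closer to 72
```

    Note that the fused variance (3.2) is smaller than either input variance, which is the formal sense in which fusion improves on each individual sensor.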

  3. Multi-sensor Data Fusion for Improved Prediction of Apple Fruit Firmness and Soluble Solids Content

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several nondestructive technologies have been developed for assessing the firmness and soluble solids content (SSC) of apples. Each of these technologies has its merits and limitations in predicting these quality parameters. With the concept of multi-sensor data fusion, different sensors would work ...

  4. A cross-layer optimization algorithm for wireless sensor network

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Liu, Le Qing

    2010-07-01

    Energy is critical for typical wireless sensor networks (WSN), and how to reduce energy consumption and maximize network lifetime are big challenges; cross-layer algorithms are the main method to address this problem. In this paper, we first analyze current layer-based optimization methods in wireless sensor networks and summarize the physical, link and routing optimization techniques. Secondly, we compare strategies used in cross-layer optimization algorithms. Based on this analysis and a summary of current lifetime algorithms in wireless sensor networks, a cross-layer optimization algorithm is proposed. This optimization algorithm is then adopted to improve the traditional LEACH routing protocol. Simulation results show that this algorithm is an excellent cross-layer algorithm for reducing energy consumption.
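
    Since the proposed algorithm builds on the traditional LEACH routing protocol, the standard LEACH cluster-head election threshold is worth recalling. The sketch below implements that standard formula, not the paper's improved variant, and the parameter values are illustrative:

```python
def leach_threshold(p, r):
    """Standard LEACH cluster-head election threshold T(n) for a node that
    has not yet served as cluster head in the current epoch.
    p: desired fraction of cluster heads per round, r: current round."""
    return p / (1.0 - p * (r % int(1.0 / p)))

# With p = 0.1 the threshold rises over a 10-round epoch, so every node
# is eventually elected cluster head exactly once per epoch.
print(round(leach_threshold(0.1, 0), 3))  # first round of the epoch
print(round(leach_threshold(0.1, 9), 3))  # last round of the epoch
```

    A node draws a uniform random number each round and becomes cluster head if the draw falls below T(n); cross-layer variants typically bias this election with physical-layer information such as residual energy.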

  5. Active vision and sensor fusion for inspection of metallic surfaces

    NASA Astrophysics Data System (ADS)

    Puente Leon, Fernando; Beyerer, Juergen

    1997-09-01

    This paper deals with strategies for reliably obtaining the edges and the surface texture of metallic objects. Since illumination is a critical aspect regarding robustness and image quality, it is considered here as an active component of the image acquisition system. The performance of the presented methods is demonstrated, among other examples, with images of needles for blood sugar tests. Such objects have an optimized form consisting of several planar ground surfaces delimited by sharp edges. To allow a reliable assessment of the quality of each surface, and a measurement of their edges, methods for fusing data obtained under different illumination configurations were developed. The fusion strategy is based on the minimization of suitable energy functions. First, an illumination-based segmentation of the object is performed. To obtain the boundaries of each surface, directional light-field illumination is used. By formulating suitable criteria, nearly binary images are selected by varying the illumination direction. Afterwards, the surface edges are obtained by fusing the contours of the areas obtained before. Next, an optimally illuminated image is acquired for each surface of the object by varying the illumination direction. For this purpose, a criterion describing the quality of the surface texture has to be maximized. Finally, the images of all textured surfaces of the object are fused into an improved result, in which the whole object is contained with high contrast. Although the presented methods were designed for the inspection of needles, they also perform robustly in other computer vision tasks where metallic objects have to be inspected.

  6. Phase 1 report on sensor technology, data fusion and data interpretation for site characterization

    SciTech Connect

    Beckerman, M.

    1991-10-01

    In this report we discuss sensor technology, data fusion and data interpretation approaches of possible maximal usefulness for subsurface imaging and characterization of land-fill waste sites. Two sensor technologies, terrain conductivity using electromagnetic induction and ground penetrating radar, are described and the literature on the subject is reviewed. We identify the maximum entropy stochastic method as one providing a rigorously justifiable framework for fusing the sensor data, briefly summarize work done by us in this area, and examine some of the outstanding issues with regard to data fusion and interpretation. 25 refs., 17 figs.

  7. Fusion of threshold rules for target detection in wireless sensor networks

    SciTech Connect

    Zhu, Mengxia; Ding, Shi-You; Brooks, Richard R; Wu, Qishi; Rao, Nageswara S

    2010-03-01

    We propose a binary decision fusion rule that reaches a global decision on the presence of a target by integrating local decisions made by multiple sensors. Without requiring a priori probability of target presence, the fusion threshold bounds derived using Chebyshev's inequality ensure a higher hit rate and lower false alarm rate compared to the weighted averages of individual sensors. The Monte Carlo-based simulation results show that the proposed approach significantly improves target detection performance, and can also be used to guide the actual threshold selection in practical sensor network implementation under certain error rate constraints.
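
    The distribution-free flavor of such a threshold bound can be sketched with the two-sided Chebyshev inequality, P(|X - μ| ≥ kσ) ≤ 1/k². This is a generic illustration under the assumption that only the mean and variance of the fused statistic under the no-target hypothesis are known; it is not necessarily the paper's exact derivation, and the numbers are hypothetical:

```python
import math

def chebyshev_threshold(mu0, sigma0, alpha):
    """Upper threshold on a fused detection statistic such that the
    false-alarm rate is at most alpha, using Chebyshev's inequality
    P(|X - mu| >= k*sigma) <= 1/k**2. No distributional assumption is
    needed beyond a finite mean and variance under 'no target'."""
    k = math.sqrt(1.0 / alpha)
    return mu0 + k * sigma0

# Hypothetical no-target statistics: mean 0.2, standard deviation 0.1.
# Requiring a false-alarm rate of at most 4% gives k = 5.
print(chebyshev_threshold(0.2, 0.1, 0.04))
```

    Because Chebyshev's inequality is conservative, the resulting threshold guarantees the error-rate constraint for any noise distribution, at the cost of being looser than a threshold tuned to a known distribution.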

  8. Real-time EO/IR sensor fusion on a portable computer and head-mounted display

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Topiwala, Pankaj

    2007-04-01

    Multi-sensor platforms are widely used in surveillance video systems for both military and civilian applications. The complementary nature of different types of sensors (e.g. EO and IR sensors) makes it possible to observe the scene under almost any condition (day/night/fog/smoke). In this paper, we propose an innovative EO/IR sensor registration and fusion algorithm which runs in real time on a portable computing unit with a head-mounted display. The EO/IR sensor suite is mounted on the helmet of a dismounted soldier, and the fused scene is shown in the goggle display after processing on the portable computing unit. The linear homography transformation between images from the two sensors is precomputed for the mid-to-far scene, which reduces the computational cost of the online calibration of the sensors. The system is implemented in highly optimized C++ code, with MMX/SSE, and performs real-time registration. Experimental results on real captured video show the system works very well both in speed and in performance.

  9. Remote Sensing Image Fusion Using Ica and Optimized Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hnatushenko, V. V.; Vasyliev, V. V.

    2016-06-01

    In remote-sensing image processing, fusion (pan-sharpening) is the process of merging high-resolution panchromatic and lower-resolution multispectral (MS) imagery to create a single high-resolution color image. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. However, the pan-sharpened images produced by these methods suffer from high color distortion of the spectral information. In this paper, to minimize the spectral distortion we propose a remote sensing image fusion method which combines Independent Component Analysis (ICA) and an optimized wavelet transform. The proposed method is based on selection of multiscale components obtained after the ICA of the images on the basis of their wavelet decomposition, and on forming linear combinations of the detail coefficients of the wavelet decomposition of the image brightness distributions, per spectral channel, with iteratively adjusted weights. These weights are determined by solving an optimization problem: maximization of the information entropy of the synthesized images formed by wavelet reconstruction. The spectral-channel images are then reconstructed by the inverse wavelet transform, and the resulting image is formed by superposition of the obtained images. To verify its validity, the proposed method is compared with several existing techniques using WorldView-2 satellite data in both subjective and objective terms. Our experiments demonstrate that the scheme provides good spectral quality and efficiency. Spectral and spatial quality metrics in terms of RASE, RMSE, CC, ERGAS and SSIM are used in our experiments. The synthesized MS images show better contrast and clarity on the boundaries of the "object of interest - the background". The results show that the proposed approach performs better than several compared methods according to the performance metrics.
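
    The entropy criterion that drives the weight adjustment can be illustrated with a minimal Shannon-entropy computation over a pixel histogram; the toy image below is hypothetical:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits per pixel) of a flat list of pixel values,
    estimated from the empirical histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A 4-level toy image with a uniform histogram carries 2 bits per pixel;
# any less uniform histogram scores lower.
flat = [0, 64, 128, 192] * 4
print(image_entropy(flat))
```

    In the fusion setting, higher entropy of the synthesized image indicates more retained information, which is why the weights are adjusted toward an entropy maximum.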

  10. Multi-sensor fusion system using wavelet-based detection algorithm applied to physiological monitoring under high-G environment

    NASA Astrophysics Data System (ADS)

    Ryoo, Han Chool

    2000-06-01

    A significant problem in physiological state monitoring systems with single data channels is a high rate of false alarms. In order to reduce the false alarm probability, several data channels can be integrated to enhance system performance. In this work, we have investigated a sensor fusion methodology applicable to physiological state monitoring, which combines local decisions made by dispersed detectors. Difficulties in biophysical signal processing are associated with nonstationary signal patterns and individual characteristics of human physiology resulting in nonidentical observation statistics. Thus a two-compartment design, a modified version of well-established fusion theory in communication systems, is presented and applied to biological signal processing, where we combine discrete wavelet transforms (DWT) with sensor fusion theory. The signals were decomposed in the time-frequency domain by the DWT to capture localized transient features. Local decisions by wavelet power analysis are followed by global decisions at the data fusion center operating under an optimization criterion, i.e., the minimum error criterion (MEC). We used three signals acquired from human volunteers exposed to high-G forces at the human centrifuge/dynamic flight simulator facility in Warminster, PA. The subjects performed anti-G straining maneuvers to protect themselves from the adverse effects of high-G forces. These maneuvers require muscular tensing and altered breathing patterns. We attempted to determine the subject's state by detecting the presence or absence of the voluntary anti-G straining maneuvers (AGSM). During the exposure to high G force the respiratory patterns, blood pressure and electroencephalogram (EEG) were measured to determine changes in the subject's state. Experimental results show that the probability of false alarm under MEC can be significantly reduced by applying the same rule found at local thresholds to all subjects, and MEC can be employed as a

  11. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations, where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. The type of optimization problem changes with the nature of the application, the sensing scenario and the input/output of the problem. To address the different kinds of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem for wireless sensor networks, consisting of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints that are considered while formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of multi-objective optimization in this article, it should open up new avenues of research in multi-objective optimization for wireless sensor networks. PMID:26205271
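
    A common way to collapse two competing objectives into one scalar problem, among the solution types such a survey covers, is weighted-sum scalarization. The energy and coverage models below are hypothetical toy functions chosen only to exhibit the tradeoff, not models from the article:

```python
def scalarized_cost(nodes, w_energy=1.0, w_coverage=20.0):
    """Weighted-sum scalarization of two competing WSN objectives.
    Toy models (hypothetical): energy spent grows linearly with the node
    count, while coverage improves with diminishing returns. Minimizing
    the weighted difference trades one objective against the other."""
    energy = 2.0 * nodes              # cost: more nodes, more energy
    coverage = 1.0 - 0.5 ** nodes     # benefit: saturating coverage
    return w_energy * energy - w_coverage * coverage

# Sweep candidate deployment sizes and keep the best tradeoff point.
best = min(range(1, 11), key=scalarized_cost)
print(best)
```

    Changing the weights moves the selected solution along the Pareto front, which is exactly the decision-maker's role described in the abstract.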

  12. LinkMind: link optimization in swarming mobile sensor networks.

    PubMed

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization of swarming mobile sensor networks. The new method is based on combination of the artificial potential force guaranteeing connectivities of the mobile sensor nodes and the max-flow min-cut theorem of graph theory ensuring optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070
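
    The max-flow min-cut computation underlying the link-capacity optimization can be sketched with a standard Edmonds-Karp implementation; the node labels and link capacities below are hypothetical:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow: repeatedly push flow along the shortest
    augmenting path found by BFS. cap is a dict-of-dicts of residual
    capacities and is modified in place."""
    # Make sure every edge has a reverse entry in the residual graph.
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left: done
            return total
        path, v = [], t              # walk back from sink to source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:            # update residual capacities
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        total += bottleneck

# Hypothetical link capacities between mobile nodes (source 0, target 3).
links = {0: {1: 3, 2: 2}, 1: {2: 1, 3: 2}, 2: {3: 3}}
print(max_flow(links, 0, 3))
```

    By the max-flow min-cut theorem the returned value equals the capacity of the tightest cut separating source from target, which is the quantity the swarm repositions its nodes to maximize.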

  13. LinkMind: Link Optimization in Swarming Mobile Sensor Networks

    PubMed Central

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization of swarming mobile sensor networks. The new method is based on combination of the artificial potential force guaranteeing connectivities of the mobile sensor nodes and the max-flow min-cut theorem of graph theory ensuring optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070

  14. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking.

    PubMed

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-01

    Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single, accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027
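
    The core of such a benchmarking procedure, injecting known errors into ground-truth data and quantifying the resulting accuracy, can be sketched with a simple RMSE comparison. The orientation angles and error magnitudes below are hypothetical, not from the study:

```python
import math

def rmse(estimates, truth):
    """Root-mean-square error between an estimate sequence and ground truth."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth))
                     / len(truth))

# Hypothetical ground-truth pelvis orientation angles (degrees) and two
# simulated measurement streams with small and large additive errors,
# mimicking low- and high-uncertainty gyroscope scenarios.
truth = [10.0, 12.0, 11.0, 13.0]
low_noise = [10.5, 11.5, 11.5, 12.5]
high_noise = [12.0, 10.0, 13.0, 11.0]
print(rmse(low_noise, truth), rmse(high_noise, truth))
```

    Repeating this comparison while varying one sensor's simulated uncertainty at a time is what isolates its contribution to the fusion algorithm's overall accuracy.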

  15. Optimization of Surface Acoustic Wave-Based Rate Sensors

    PubMed Central

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed using the approach of partial-wave analysis in layered media. The optimal sensor chip design parameters, including the choice of piezoelectric crystal and metallic dot material, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally using the developed SAW sensor, composed of differential delay-line oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO3, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  16. Optimization of surface acoustic wave-based rate sensors.

    PubMed

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed using the approach of partial-wave analysis in layered media. The optimal sensor chip design parameters, including the choice of piezoelectric crystal and metallic dot material, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally using the developed SAW sensor, composed of differential delay-line oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO₃, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  17. Optimized data fusion for K-means Laplacian clustering

    PubMed Central

    Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves

    2011-01-01

    Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271
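
    The objective at the heart of such a method, a Rayleigh quotient evaluated on a weighted combination of kernels, can be sketched as follows. The kernel matrices, coefficients, and test vector are toy values, not from the paper, and the full bi-level alternating minimization is omitted:

```python
def combine_kernels(kernels, coeffs):
    """Elementwise weighted sum of kernel matrices (lists of lists)."""
    n = len(kernels[0])
    return [[sum(c * k[i][j] for c, k in zip(coeffs, kernels))
             for j in range(n)] for i in range(n)]

def rayleigh_quotient(m, v):
    """R(v) = v^T M v / v^T v for a symmetric matrix M; the quantity the
    clustering objective maximizes over cluster-indicator vectors v."""
    mv = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
    return sum(vi * mvi for vi, mvi in zip(v, mv)) / sum(vi * vi for vi in v)

# Two toy 2x2 kernels combined with equal coefficients.
k1 = [[2.0, 0.0], [0.0, 1.0]]
k2 = [[1.0, 0.0], [0.0, 3.0]]
omega = combine_kernels([k1, k2], [0.5, 0.5])
print(rayleigh_quotient(omega, [1.0, 0.0]))
```

    The alternating procedure in the paper interleaves two such steps: optimizing the indicator vectors for fixed kernel coefficients, then re-optimizing the coefficients for fixed indicators.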

  18. Hybrid intelligent control concepts for optimal data fusion

    NASA Astrophysics Data System (ADS)

    Llinas, James

    1994-02-01

    In the post-Cold War era, Naval surface ship operations will be largely conducted in littoral waters to support regional military missions of all types, including humanitarian and evacuation activities, and amphibious mission execution. Under these conditions, surface ships will be much more isolated and vulnerable to a variety of threats, including maneuvering antiship missiles. To deal with these threats, the optimal employment of multiple shipborne sensors for maximum vigilance is paramount. This paper characterizes the sensor management problem as one of intelligent control, identifies some of the key issues in controller design, and presents one approach to controller design which is soon to be implemented and evaluated. It is argued that the complexity and hierarchical nature of problem formulation demands a hybrid combination of knowledge-based methods and scheduling techniques from 'hard' real-time systems theory for its solution.

  19. Optimal geometry for a quartz multipurpose SPM sensor

    PubMed Central

    2013-01-01

    Summary We propose a geometry for a piezoelectric SPM sensor that can be used for combined AFM/LFM/STM. The sensor utilises symmetry to provide a lateral mode without the need to excite torsional modes. The symmetry allows normal and lateral motion to be completely isolated, even when introducing large tips to tune the dynamic properties to optimal values. PMID:23844342

  20. Optimal Magnetic Sensor Vests for Cardiac Source Imaging.

    PubMed

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-01-01

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910
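
    The robustness measure used here, the condition number of the lead field matrix, is the ratio of its largest to smallest singular value. For a 2x2 matrix it can be computed in closed form; the matrices below are hypothetical toy stand-ins for a real lead field matrix:

```python
import math

def condition_number_2x2(a):
    """Condition number (ratio of singular values) of a 2x2 matrix,
    from the closed-form eigenvalues of the symmetric matrix A^T A."""
    (a11, a12), (a21, a22) = a
    p = a11 * a11 + a21 * a21          # (A^T A)[0][0]
    q = a11 * a12 + a21 * a22          # (A^T A)[0][1] = [1][0]
    r = a12 * a12 + a22 * a22          # (A^T A)[1][1]
    # Eigenvalues of [[p, q], [q, r]] via the quadratic formula.
    mean = (p + r) / 2.0
    gap = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
    return math.sqrt((mean + gap) / (mean - gap))

# A well-conditioned and a poorly conditioned toy "lead field" matrix:
print(condition_number_2x2([[1.0, 0.0], [0.0, 1.0]]))   # identity
print(condition_number_2x2([[10.0, 0.0], [0.0, 0.1]]))  # anisotropic
```

    A lower condition number means that noise in the sensor readings is amplified less when inverting the lead field to reconstruct the cardiac sources, which is why the optimization drives it down by two orders of magnitude.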

  1. Optimal Magnetic Sensor Vests for Cardiac Source Imaging

    PubMed Central

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-01-01

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910

  2. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to a blacklist, and the cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and delete compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in terms of data fusion security and accuracy. PMID:25608211
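
    The base-station check can be sketched as follows. The paper's exact dissimilarity coefficient is not specified here, so a normalized mean absolute difference is used as a stand-in, and the fusion results and threshold are hypothetical:

```python
def dissimilarity(a, b):
    """Stand-in dissimilarity coefficient between two fusion results:
    mean absolute difference normalized by the mean magnitude."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    scale = sum(abs(x) + abs(y) for x, y in zip(a, b)) / (2 * len(a))
    return diff / scale

def check_cluster_heads(result_a, result_b, threshold=0.1):
    """Accept the fused data if the two independent cluster heads agree;
    otherwise flag both heads for blacklisting and re-election."""
    if dissimilarity(result_a, result_b) > threshold:
        return "blacklist"
    return "accept"

# Agreeing heads pass; a head reporting wildly different values is caught.
print(check_cluster_heads([20.0, 21.0], [20.5, 21.5]))
print(check_cluster_heads([20.0, 21.0], [35.0, 40.0]))
```

    The security property rests on the independence of the two heads: a single compromised head cannot forge agreement, so tampering surfaces as a large dissimilarity at the base station.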

  3. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking

    PubMed Central

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-01

    Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single, accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor’s uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027

  4. Geometrical optimization of a local ballistic magnetic sensor

    SciTech Connect

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya; Kimura, Takashi

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  5. An epidemic model for biological data fusion in ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Chang, K. C.; Kotari, Vikas

    2009-05-01

    Bioterrorism can be a highly refined and catastrophic way of attacking a nation. Countering it requires the development of a complete architecture designed specifically for this purpose, which includes but is not limited to sensing/detection, tracking and fusion, communication, and other functions. In this paper we focus on one such architecture and evaluate its performance. Various sensors for this specific purpose have been studied. The focus has been on the use of distributed systems such as ad hoc networks and on the application of epidemic data fusion algorithms to better manage bio-threat data. The emphasis has been on understanding the performance characteristics of these algorithms under diverse real-time scenarios, which are implemented through extensive Java-based simulations. Through comparative studies of communication and fusion, the performance of the channel filter algorithm for biological sensor data fusion is validated.

  6. A novel tiered sensor fusion approach for terrain characterization and safe landing assessment

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Bajracharya, Max; Howard, Ayanna; Seraji, Homayoun

    2005-01-01

    This paper presents a novel tiered sensor fusion methodology for real-time terrain safety assessment. A combination of active and passive sensors, specifically, radar, lidar, and camera, operate in three tiers according to their inherent ranges of operation. Low-level terrain features (e.g. slope, roughness) and high-level terrain features (e.g. hills, craters) are integrated using principles of reasoning under uncertainty. Three methodologies are used to infer landing safety: Fuzzy Reasoning, Probabilistic Reasoning, and Evidential Reasoning. The safe landing predictions from the three fusion engines are consolidated in a subsequent decision fusion stage aimed at combining the strengths of each fusion methodology. Results from simulated spacecraft descents are presented and discussed.
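The final decision-fusion stage, which consolidates the safety predictions of the three reasoning engines, might be sketched as a weighted average with a conservative veto; the veto rule and its threshold are hypothetical, not the paper's method:

```python
def decision_fusion(scores, weights=None):
    # Consolidate safe-landing scores in [0, 1] from several fusion
    # engines (e.g. fuzzy, probabilistic, evidential) by a weighted
    # mean, with a conservative veto: if any engine rates the site
    # clearly unsafe, its score wins. Veto threshold is assumed.
    weights = weights or [1.0] * len(scores)
    mean = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    if min(scores) < 0.2:
        return min(scores)
    return mean

safe = decision_fusion([0.8, 0.7, 0.9])    # engines agree: weighted mean
risky = decision_fusion([0.8, 0.1, 0.9])   # one engine vetoes
```

The veto reflects the safety-critical setting: a confident "unsafe" from any one engine should not be averaged away by the other two.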

  7. Reliability estimates for selected sensors in fusion applications

    SciTech Connect

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study of several types of process sensors in use, defining their qualitative reliability (failure modes) and quantitative reliability (average failure rates). Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety.

  8. The Architecture of Information Fusion System in Greenhouse Wireless Sensor Network Based on Multi-Agent

    NASA Astrophysics Data System (ADS)

    Zhu, Wenting; Chen, Ming

    In view of the currently stagnant state of factory breeding in aquaculture, this article designs a standardized, information-driven, and intelligent aquaculture system, proposes an information fusion architecture based on multi-agent technology in a greenhouse wireless sensor network (GWSN), and focuses on the structural characteristics of the four-tier information fusion scheme based on distributed multi-agents and on the method for constructing the internal structure of each agent.

  9. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. This paper also gives an overview of the first results showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and a reduction of sensitivity can be

  10. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is described in detail in this paper. In order to achieve greater intelligence and robustness, accurate real-time collection and processing of information are realized by using this technology. The method for achieving robot vision and multi-sensor fusion is discussed in detail. Results simulated in several operating modes show that this intelligent vehicle performs better in obstacle identification and avoidance and in path planning, which can provide higher reliability during vehicle operation.

  11. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

    The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.

  12. Efficient Sensor Placement Optimization Using Gradient Descent and Probabilistic Coverage

    PubMed Central

    Akbarzadeh, Vahab; Lévesque, Julien-Charles; Gagné, Christian; Parizeau, Marc

    2014-01-01

    We are proposing an adaptation of the gradient descent method to optimize the position and orientation of sensors for the sensor placement problem. The novelty of the proposed method lies in the combination of gradient descent optimization with a realistic model, which considers both the topography of the environment and a set of sensors with directional probabilistic sensing. The performance of this approach is compared with two other black box optimization methods over area coverage and processing time. Results show that our proposed method produces competitive results on smaller maps and superior results on larger maps, while requiring much less computation than the other optimization methods to which it has been compared. PMID:25196164
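A minimal version of gradient-based placement under a probabilistic sensing model can be sketched as follows; the Gaussian-decay detection model, flat terrain, omnidirectional sensors, and finite-difference gradients are all simplifying assumptions (the paper uses a realistic topographic model with directional probabilistic sensing):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(200, 2))  # terrain sample points (assumed)

def coverage(sensors, sigma=1.5):
    # Probabilistic sensing: each sensor detects a point with a
    # Gaussian-decay probability; the objective is the expected
    # number of covered points (toy model, not the paper's).
    d2 = ((points[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)
    p_miss = np.prod(1.0 - np.exp(-d2 / (2 * sigma ** 2)), axis=1)
    return float((1.0 - p_miss).sum())

def optimize(sensors, steps=80, lr=0.05, eps=1e-4):
    # Finite-difference gradient ascent on all sensor coordinates,
    # keeping the best layout seen so far.
    s = sensors.copy()
    best, best_cov = s.copy(), coverage(s)
    for _ in range(steps):
        grad = np.zeros_like(s)
        for idx in np.ndindex(*s.shape):
            s[idx] += eps
            up = coverage(s)
            s[idx] -= 2 * eps
            down = coverage(s)
            s[idx] += eps
            grad[idx] = (up - down) / (2 * eps)
        s = s + lr * grad
        c = coverage(s)
        if c > best_cov:
            best, best_cov = s.copy(), c
    return best

init = rng.uniform(0, 10, size=(4, 2))
opt = optimize(init)
```

With analytic gradients, as in the paper's approach, each step is far cheaper than the two coverage evaluations per coordinate used here.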

  13. Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase.

    PubMed

    Lu, Kelin; Zhou, Rui

    2016-01-01

    A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
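When local tracks share duplicate information, a standard way to fuse them without double-counting is covariance intersection, which remains consistent under unknown cross-correlation; this is a generic sketch, not the paper's first-order redundancy-removal step:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    # Fuse two track estimates (mean, covariance) without knowing
    # their cross-correlation; the convex combination of information
    # matrices stays consistent even when the tracks share data.
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
    return x, P

x, P = covariance_intersection(
    np.array([1.0, 0.0]), 2.0 * np.eye(2),   # local track 1
    np.array([1.2, 0.1]), 1.0 * np.eye(2))   # local track 2
```

The fused covariance (4/3·I here) is deliberately larger than naive independent fusion would give (2/3·I), reflecting the possible duplicate information between the local tracks.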

  14. Data fusion on a distributed heterogeneous sensor network.

    SciTech Connect

    Lamborn, Peter; Williams, Pamela J.

    2006-02-01

    Alarm-based sensor systems are being explored as a tool to expand perimeter security for facilities and force protection. However, the collection of increased sensor data has resulted in an insufficient solution that includes faulty data points. Data analysis is needed to reduce nuisance and false alarms, which will improve officials' decision making and confidence levels in the system's alarms. Moreover, operational costs can be allayed and losses mitigated if authorities are alerted only when a real threat is detected. In the current system, heuristics such as persistence of alarm and type of sensor that detected an event are used to guide officials' responses. We hypothesize that fusing data from heterogeneous sensors in the sensor field can provide more complete situational awareness than looking at individual sensor data. We propose a two stage approach to reduce false alarms. First, we use self organizing maps to cluster sensors based on global positioning coordinates and then train classifiers on the within cluster data to obtain a local view of the event. Next, we train a classifier on the local results to compute a global solution. We investigate the use of machine learning techniques, such as k-nearest neighbor, neural networks, and support vector machines to improve alarm accuracy. On simulated sensor data, the proposed approach identifies false alarms with greater accuracy than a weighted voting algorithm.
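The two-stage idea of clustering sensors by position and then fusing local outputs globally can be illustrated with a toy version; fixed centroids stand in for the trained self-organizing map, and an alarm-fraction vote stands in for the trained classifiers:

```python
import numpy as np

def two_stage_alarm(sensor_pos, sensor_alarms, centroids,
                    global_threshold=0.5):
    # Stage 1: group sensors by position (nearest fixed centroid,
    # standing in for a trained self-organizing map) and let each
    # cluster report the fraction of its members alarming (a crude
    # stand-in for the local classifier).
    d2 = ((sensor_pos[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    local = np.array([sensor_alarms[labels == k].mean()
                      for k in range(len(centroids))])
    # Stage 2: a global rule fuses the cluster outputs
    # (threshold assumed for illustration).
    return local, float(local.mean()) > global_threshold

pos = np.array([[0, 0], [0, 1], [9, 9], [9, 8]], dtype=float)
alarms = np.array([1, 1, 0, 1], dtype=float)
centroids = np.array([[0, 0], [9, 9]], dtype=float)
local, global_alarm = two_stage_alarm(pos, alarms, centroids)
```

In the real system both stages are trained classifiers (k-nearest neighbor, neural networks, or support vector machines) rather than fixed rules.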

  15. Data fusion on a distributed heterogeneous sensor network

    NASA Astrophysics Data System (ADS)

    Lamborn, Peter; Williams, Pamela J.

    2006-04-01

    Alarm-based sensor systems are being explored as a tool to expand perimeter security for facilities and force protection. However, the collection of increased sensor data has resulted in an insufficient solution that includes faulty data points. Data analysis is needed to reduce nuisance and false alarms, which will improve officials' decision making and confidence levels in the system's alarms. Moreover, operational costs can be allayed and losses mitigated if authorities are alerted only when a real threat is detected. In the current system, heuristics such as persistence of alarm and type of sensor that detected an event are used to guide officials' responses. We hypothesize that fusing data from heterogeneous sensors in the sensor field can provide more complete situational awareness than looking at individual sensor data. We propose a two stage approach to reduce false alarms. First, we use self organizing maps to cluster sensors based on global positioning coordinates and then train classifiers on the within cluster data to obtain a local view of the event. Next, we train a classifier on the local results to compute a global solution. We investigate the use of machine learning techniques, such as k-nearest neighbor, neural networks, and support vector machines to improve alarm accuracy. On simulated sensor data, the proposed approach identifies false alarms with greater accuracy than a weighted voting algorithm.

  16. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observing System Data and Information System) prototyping. This will maximize the work on the fusion algorithms, since support software (e.g., input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  17. Scalable sensor management for automated fusion and tactical reconnaissance

    NASA Astrophysics Data System (ADS)

    Walls, Thomas J.; Wilson, Michael L.; Partridge, Darin C.; Haws, Jonathan R.; Jensen, Mark D.; Johnson, Troy R.; Petersen, Brad D.; Sullivan, Stephanie W.

    2013-05-01

    The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities operate largely independent of one another, providing a selection of operating modes but not an integrated intelligence product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. Management of sensors and user agents takes place over standard network protocols such that any number and combination of sensors and user agents, either on the local network or connected via data link, can register with the SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant sensor system

  18. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend of ever-narrowing FOVs housed in smaller and lighter volumes. This approach promotes the ever-increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence the FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter, and collectively the overall system performance. This paper presents a trade-off methodology for selecting the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as the space/volume available and window size) by balancing the influences FOV has on sensitivity and resolution, thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long-wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.
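The core trade-off is easy to quantify: for a fixed aperture diameter, narrowing the FOV lengthens the focal length, which shrinks the IFOV (finer geometric resolution) but raises the f-number, costing sensitivity and enlarging the diffraction blur. A toy calculation with assumed sensor parameters:

```python
import math

def fov_tradeoff(fov_deg, aperture_mm=50.0, pitch_um=17.0,
                 n_pix=640, wavelength_um=10.0):
    # For a fixed aperture diameter, the FOV fixes the focal length,
    # which in turn sets the IFOV (geometric resolution), the
    # f-number (sensitivity), and the diffraction spot size.
    # All parameter values are assumed for illustration.
    half_width_mm = n_pix * pitch_um * 1e-3 / 2.0
    f_mm = half_width_mm / math.tan(math.radians(fov_deg) / 2.0)
    f_number = f_mm / aperture_mm
    ifov_mrad = pitch_um * 1e-3 / f_mm * 1e3
    airy_um = 2.44 * wavelength_um * f_number  # diffraction spot diameter
    return f_number, ifov_mrad, airy_um

wide = fov_tradeoff(20.0)    # wide FOV: fast optics, coarse IFOV
narrow = fov_tradeoff(5.0)   # narrow FOV: fine IFOV, slow optics
```

For these assumed parameters the 5° case is already diffraction-limited in the long-wave infrared (the Airy disk exceeds the pixel pitch), which is exactly the regime where further FOV narrowing buys little real resolution.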

  19. Optimal flow sensor placement on wastewater treatment plants.

    PubMed

    Villez, Kris; Vanrolleghem, Peter A; Corominas, Lluís

    2016-09-15

    Obtaining high quality data collected on wastewater treatment plants is gaining increasing attention in the wastewater engineering literature. Typical studies focus on recognition of faulty data with a given set of installed sensors on a wastewater treatment plant. Little attention is however given to how one can install sensors in such a way that fault detection and identification can be improved. In this work, we develop a method to obtain Pareto optimal sensor layouts in terms of cost, observability, and redundancy. Most importantly, the resulting method allows reducing the large set of possibilities to a minimal set of sensor layouts efficiently for any wastewater treatment plant on the basis of structural criteria only, with limited sensor information, and without prior data collection. In addition, the developed optimization scheme is fast. Practically important is that the number of sensors needed for both observability of all flows and redundancy of all flow sensors is only one more compared to the number of sensors needed for observability of all flows in the studied wastewater treatment plant configurations. PMID:27258618
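Reducing the candidate layouts to the Pareto optimal set in terms of cost, observability, and redundancy amounts to a standard non-dominance filter; the layout scores below are hypothetical, and the paper derives them from structural criteria of the plant model:

```python
def pareto_front(layouts):
    # Keep the non-dominated layouts. Each layout is scored as
    # (cost, -observability, -redundancy) so that lower is better in
    # every coordinate; a layout is dropped if another is at least as
    # good everywhere and strictly better somewhere.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [a for a in layouts
            if not any(dominates(b, a) for b in layouts if b != a)]

layouts = [
    (3, -5, -1),   # 3 sensors, observability 5, redundancy 1
    (4, -5, -1),   # dominated: more cost, same benefits
    (4, -6, -2),
    (5, -6, -2),   # dominated
]
front = pareto_front(layouts)
```

The pairwise filter above is quadratic in the number of layouts; the paper's contribution is precisely avoiding this brute-force enumeration over the full combinatorial set.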

  20. Optimal weighted suprathreshold stochastic resonance with multigroup saturating sensors

    NASA Astrophysics Data System (ADS)

    Xu, Liyan; Duan, Fabing; Abbott, Derek; McDonnell, Mark D.

    2016-09-01

    Suprathreshold stochastic resonance (SSR) describes a noise-enhanced effect that occurs, not in a single element, but rather in an array of nonlinear elements when the signal is no longer subthreshold. Within the context of SSR, we investigate the optimization problem of signal recovery through an array of saturating sensors where the response of each element can be optimally weighted prior to summation, with a performance measure of mean square error (MSE). We consider groups of sensors. Individual sensors within each group have identical parameters, but each group has distinct parameters. We find that optimally weighting the sensor responses provides a lower MSE in comparison with the unweighted case for weak and moderate noise intensities. Moreover, as the slope parameter of the nonlinear sensors increases, the MSE superiority of the optimally weighted array shows a peak, and then tends to a fixed value. These results indicate that SSR with optimal weights, as a general mechanism of enhancement by noise, is of potential interest to signal recovery.
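The benefit of optimal weighting can be seen even in a simplified linear (non-saturating) version of the model, where inverse-variance weights are the linear-MMSE choice; the per-group noise levels below are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(0.0, 1.0, n)              # signal to recover
sigmas = np.array([0.3, 0.6, 1.2])       # per-group noise levels (assumed)
obs = x[:, None] + rng.normal(0.0, 1.0, (n, 3)) * sigmas

def mse(estimate):
    return float(np.mean((estimate - x) ** 2))

unweighted = obs.mean(axis=1)            # plain average of sensor outputs
w = 1.0 / sigmas ** 2                    # inverse-variance weights
weighted = obs @ (w / w.sum())
```

In this linear toy, the weighted MSE approaches 1/Σσᵢ⁻² ≈ 0.069 versus 0.21 for the plain average; with saturating sensors the optimal weights differ, but per the abstract the weighted array keeps its MSE advantage at weak and moderate noise intensities.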

  1. Particle swarm optimization for the clustering of wireless sensors

    NASA Astrophysics Data System (ADS)

    Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.

    2003-07-01

    Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem. Solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique in which a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
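A bare-bones PSO for placing cluster heads can be sketched as follows, using the summed squared distance from each node to its nearest head as an energy proxy (the paper's actual fitness is evaluated through a modified LEACH-C energy model, and the head count here is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, (60, 2))  # sensor node positions
K = 4                                  # number of cluster heads (assumed)

def cost(flat):
    # Energy proxy: total squared distance from each node to its
    # nearest cluster head (candidate = K flattened 2-D positions).
    heads = flat.reshape(K, 2)
    d2 = ((nodes[:, None, :] - heads[None, :, :]) ** 2).sum(-1)
    return float(d2.min(axis=1).sum())

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Standard PSO update: inertia plus attraction toward each
    # particle's personal best and the swarm's global best.
    dim = 2 * K
    pos = rng.uniform(0, 100, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, float(cost(gbest))

best_heads, best_cost = pso()
```

Against the random-search baseline mentioned in the abstract, the swarm reliably reaches far lower cost than random head placements.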

  2. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device

    PubMed Central

    He, Xiang; Aloi, Daniel N.; Li, Jia

    2015-01-01

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387
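The online tracking phase can be illustrated with a minimal one-dimensional particle filter: motion-sensor estimates drive the prediction step, an observation likelihood reweights the particles, and resampling closes the loop. All noise levels here are assumed, and the real system is multimodal and multi-sensor:

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(motions, observations, n=2000,
                    motion_sd=0.3, obs_sd=0.5):
    # 1-D sketch of the online phase: propagate particles with the
    # motion-sensor estimate, weight them by the observation
    # likelihood (a stand-in for a signal-strength fingerprint
    # match), then resample.
    particles = rng.uniform(0.0, 10.0, n)
    for u, z in zip(motions, observations):
        particles = particles + u + rng.normal(0.0, motion_sd, n)
        w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)
        w /= w.sum()
        particles = rng.choice(particles, size=n, p=w)
    return float(particles.mean())

# True position starts near 2.0 and moves +1.0 per step; the noisy
# position observations are 3.0, 4.0, 5.0
est = particle_filter([1.0, 1.0, 1.0], [3.0, 4.0, 5.0])
```

Because resampling keeps multiple hypotheses alive, this scheme tolerates the multimodal likelihoods typical of WiFi fingerprinting far better than a single-Gaussian filter would.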

  3. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device.

    PubMed

    He, Xiang; Aloi, Daniel N; Li, Jia

    2015-01-01

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387

  4. Computer vision and sensor fusion for detecting buried objects

    SciTech Connect

    Clark, G.A.; Hernandez, J.E.; Sengupta, S.K.; Sherwood, R.J.; Schaich, P.C.; Buhl, M.R.; Kane, R.J.; DelGrande, N.K.

    1992-10-01

    Given multiple images of the surface of the earth from dual-band infrared sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. Supervised learning pattern classifiers (including neural networks) are used. We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing information from multiple sensor types. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved problem of detecting buried land mines from an airborne standoff platform.

  5. Neural Network Model For Fusion Of Visible And Infrared Sensor Outputs

    NASA Astrophysics Data System (ADS)

    Ajjimarangsee, Pongsak; Huntsberger, Terrance L.

    1989-01-01

    Integration of outputs from multiple sensors has been the subject of much of the recent research in the machine vision field. This process is useful in a variety of applications, such as three dimensional interpretation of scenes imaged by multiple cameras, integration of visible and range data, and the fusion of multiple types of sensors. The use of multiple types of sensors for machine vision poses the problem of how to integrate the information from these sensors. This paper presents a neural network model for the fusion of visible and thermal infrared sensor outputs. Since there is no human biological system that can be used as a model for integration of these sensor outputs, alternate biological systems for sensory fusions can serve as starting points. In this paper, a model is developed based upon six types of bimodal neurons found in the optic tectum of the rattlesnake. These neurons integrate visible and thermal infrared sensory inputs. The neural network model has a series of layers which include a layer for unsupervised clustering in the form of self-organizing feature maps, followed by a layer which has multiple filters that are generated by training a neural net with experimental rattlesnake response data. The final layer performs another unsupervised clustering for integration of the output from the filter layer. The results of a number of experiments are also presented.

  6. Towards an optimal fusion of SMOS and Aquarius SSS data

    NASA Astrophysics Data System (ADS)

    Guimbard, Sebastien; Umbert, Marta; Turiel, Antonio; Portabella, Marcos

    2014-05-01

    straightforwardly merged. However, this is not true since SMOS and Aquarius SSS retrieval algorithms differ and such differences lead to non-negligible differences in the derived SSS maps. This can be shown by simply analyzing the differences between the different products (i.e., different SSS retrieval algorithms) available for each mission separately. In this work, a thorough assessment of the impact of using different auxiliary data (e.g., sea surface winds: ECMWF, NCEP, Aquarius scatterometer; sea surface temperature: Reynolds, OSTIA), different forward models (galactic, dielectric constant, and roughness models), and different retrieval approaches (multiparametric Bayesian inversion, direct retrievals by forward propagation to TB corrections for TEC, galaxy, and roughness) on the final SSS maps is carried out. This analysis sets the grounds for an optimal fusion of SMOS and Aquarius SSS data.

  7. Neural network implementations of data association algorithms for sensor fusion

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementation of the algorithms, along with simulation results comparing the approaches are provided.
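A simple baseline for this association step is greedy nearest-neighbor pairing with a chi-square gate on the Mahalanobis distance, sketched below; the paper studies neural-network implementations of such algorithms, and the gate value used here is the standard 99% point for 2 degrees of freedom:

```python
import numpy as np

def associate(db_reports, new_reports, cov, gate=9.21):
    # Greedy nearest-neighbor association: pair each new bivariate
    # Gaussian report with the closest unused database report when
    # the squared Mahalanobis distance passes the chi-square gate
    # (9.21 is approximately the 99% point for 2 degrees of freedom).
    inv = np.linalg.inv(cov)
    pairs, used = [], set()
    for j, z in enumerate(new_reports):
        cand = [(float((z - r) @ inv @ (z - r)), i)
                for i, r in enumerate(db_reports) if i not in used]
        cand = [(d2, i) for d2, i in cand if d2 <= gate]
        if cand:
            _, i = min(cand)
            used.add(i)
            pairs.append((i, j))
    return pairs

db = np.array([[0.0, 0.0], [10.0, 10.0]])
new = np.array([[0.3, -0.2], [9.8, 10.1], [50.0, 50.0]])
pairs = associate(db, new, np.eye(2))
# the third report falls outside every gate -> candidate new entity
```

Unassociated new reports become candidate new entities, and database reports that repeatedly fail to associate are candidates for deletion, which is how the database update the abstract describes proceeds.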

  8. Towards an Optimal Energy Consumption for Unattended Mobile Sensor Networks through Autonomous Sensor Redeployment

    PubMed Central

    Jia, Jie; Wen, Yingyou; Zhao, Dazhe

    2014-01-01

    Energy hole is an inherent problem caused by heavier traffic loads of sensor nodes nearer the sink because of more frequent data transmission, which is strongly dependent on the topology induced by the sensor deployment. In this paper, we propose an autonomous sensor redeployment algorithm to balance energy consumption and mitigate energy hole for unattended mobile sensor networks. First, with the target area divided into several equal width coronas, we present a mathematical problem modeling sensor node layout as well as transmission pattern to maximize network coverage and reduce communication cost. And then, by calculating the optimal node density for each corona to avoid energy hole, a fully distributed movement algorithm is proposed, which can achieve an optimal distribution quickly only by pushing or pulling its one-hop neighbors. The simulation results demonstrate that our algorithm achieves a much smaller average moving distance and a much longer network lifetime than existing algorithms and can eliminate the energy hole problem effectively. PMID:24949494
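The corona argument can be made concrete with a toy allocation: if corona i relays all traffic generated at or beyond itself, its aggregate load scales with the outer area, so allocating nodes in proportion to that load equalizes per-node energy use. This is a simplified sketch, not the paper's exact density formula:

```python
def balanced_node_counts(total_nodes, m):
    # Corona i (1-indexed, equal widths, outer radius m units) relays
    # all traffic generated at or beyond itself, so its aggregate
    # load is proportional to the outer area m^2 - (i-1)^2.
    # Allocating nodes in proportion equalizes per-node load.
    loads = [m ** 2 - (i - 1) ** 2 for i in range(1, m + 1)]
    total = sum(loads)
    return [round(total_nodes * load / total) for load in loads]

counts = balanced_node_counts(300, 4)
# inner coronas receive more nodes than outer ones
```

A uniform deployment instead gives every corona a node count proportional to its own area, leaving the innermost nodes overloaded, which is the energy-hole effect the redeployment algorithm corrects.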

  9. Towards an optimal energy consumption for unattended mobile sensor networks through autonomous sensor redeployment.

    PubMed

    Chen, Jian; Jia, Jie; Wen, Yingyou; Zhao, Dazhe

    2014-01-01

    Energy hole is an inherent problem caused by heavier traffic loads of sensor nodes nearer the sink because of more frequent data transmission, which is strongly dependent on the topology induced by the sensor deployment. In this paper, we propose an autonomous sensor redeployment algorithm to balance energy consumption and mitigate energy hole for unattended mobile sensor networks. First, with the target area divided into several equal width coronas, we present a mathematical problem modeling sensor node layout as well as transmission pattern to maximize network coverage and reduce communication cost. And then, by calculating the optimal node density for each corona to avoid energy hole, a fully distributed movement algorithm is proposed, which can achieve an optimal distribution quickly only by pushing or pulling its one-hop neighbors. The simulation results demonstrate that our algorithm achieves a much smaller average moving distance and a much longer network lifetime than existing algorithms and can eliminate the energy hole problem effectively. PMID:24949494

  10. A review of sensor data fusion for explosives and weapons detection

    NASA Astrophysics Data System (ADS)

    Kemp, Michael C.

    2013-05-01

    The combination or fusion of data from multiple complementary sensors can potentially improve system performance in many explosives and weapons detection applications. The motivations for fusion can include improved probability of detection; reduced false alarms; detection of an increased range of threats; higher throughput and better resilience to adversary countermeasures. This paper presents the conclusions of a study which surveyed a wide range of data fusion techniques and examples of the research, development and practical use of fusion in explosives detection. Different application types such as aviation checkpoint, checked baggage and stand-off detection are compared and contrasted, and the degree to which sensors can be regarded as `orthogonal' is explored. Whilst data fusion is frequently cited as an opportunity, there are fewer examples of its operational deployment. Blockers to the wider use of data fusion include the difficulty of predicting the performance gains that are likely to be achieved in practice, as well as a number of cost, commercial, integration, test and evaluation issues. The paper makes a number of recommendations for future research work.

  11. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident, and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps (1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency (2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side (3). The BSI structure has a further advantage: it presents fewer difficulties in attaching an additional layer, such as a scintillator, to the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in their integration.

  12. Steam distribution and energy delivery optimization using wireless sensors

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.

    2011-05-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time, measurement-based energy efficiency optimization framework on the ORNL campus. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum-based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We present the sensor readings, steam flow, steam trap status, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage, and energy consumption.

  13. Steam distribution and energy delivery optimization using wireless sensors

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Djouadi, Seddik M; Lake, Joe E

    2011-01-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time, measurement-based energy efficiency optimization framework on the ORNL campus. With particular focus on the 12-mile-long steam distribution network on our campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages integrated wireless sensing and real-time monitoring capabilities. We assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observing the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum-based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We present the sensor readings, steam flow, steam trap status, and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage, and energy consumption.
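
    As a rough illustration of the Fourier-spectrum approach described above, a leaking trap can be flagged when high-frequency acoustic energy dominates the spectrum. The band limits, threshold, and test signals below are illustrative assumptions, not values from the ORNL system.

```python
import numpy as np

def band_energy_ratio(signal, fs, band=(2000.0, 5000.0)):
    """Fraction of spectral power inside 'band' (Hz), via the FFT magnitude."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / spec.sum()

def classify_trap(signal, fs, threshold=0.3):
    """Label a trap 'leaking' when high-frequency acoustic energy dominates."""
    return "leaking" if band_energy_ratio(signal, fs) > threshold else "healthy"
```

    A healthy trap with only low-frequency flow noise stays well below the threshold, while broadband or high-frequency leak noise pushes the ratio toward one.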

  14. Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas

    NASA Technical Reports Server (NTRS)

    Young, D. T.

    1993-01-01

    The cost of space plasma sensors is high for several reasons: (1) most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.

  15. Recent results and challenges in development of metallic Hall sensors for fusion reactors

    SciTech Connect

    Ďuran, Ivan; Mušálek, Radek; Kovařík, Karel; Sentkerestiová, Jana; Kohout, Michal

    2014-08-21

    Reliable and precise diagnostics of the local magnetic field are crucial for the successful operation of future thermonuclear fusion reactors based on magnetic confinement. Magnetic sensors at these devices will experience an extremely demanding operational environment, with large radiation and thermal loads combined with the requirement of long-term, reliable, and service-free performance. Neither present-day commercial nor laboratory measurement systems comply with these requirements. Metallic Hall sensors based on e.g. copper or bismuth could potentially satisfy these needs. We present the technology for manufacturing such sensors and some initial results on the characterization of their properties.

  16. Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.

    2008-01-01

    bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed to integrate measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models, and they provide a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).
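
    The core of EKF-based sensor fusion is the measurement update, which folds each sensor's reading into the state estimate weighted by that sensor's error model. A minimal linear sketch follows (the flight filters above are extended, multi-state, and far richer; the numbers in the usage are made up):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Single Kalman measurement update: fuse measurement z into state (x, P)."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # correct the state with the innovation
    P = (np.eye(len(x)) - K @ H) @ P # shrink the covariance
    return x, P
```

    Applying the update twice with two sensors of different noise levels yields a fused estimate whose variance is smaller than either sensor's alone, which is the basic payoff of fusion.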

  17. Optimizing Sensor and Actuator Arrays for ASAC Noise Control

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran

    2000-01-01

    This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
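
    Tabu search, as named in the abstract, keeps a short memory of recent moves to escape local minima while exploring subsets of candidate locations. A generic sketch on an artificial cost function follows; nothing here reproduces the paper's structural-acoustic cost function or parameters.

```python
import random

def tabu_search(cost, n_candidates, n_select, iters=50, tabu_len=7, seed=0):
    """Directed combinatorial search over location subsets (minimal sketch).

    'cost' maps a sorted tuple of selected indices to a scalar; lower is
    better. Each move swaps one selected location for an unselected one;
    recently swapped-in locations are tabu unless the move beats the best
    cost found so far (the aspiration criterion).
    """
    rng = random.Random(seed)
    current = rng.sample(range(n_candidates), n_select)
    best, best_cost = list(current), cost(tuple(sorted(current)))
    tabu = []
    for _ in range(iters):
        moves = []
        for i in range(len(current)):
            for in_idx in set(range(n_candidates)) - set(current):
                cand = current[:i] + [in_idx] + current[i + 1:]
                c = cost(tuple(sorted(cand)))
                if in_idx not in tabu or c < best_cost:
                    moves.append((c, cand, in_idx))
        if not moves:
            break
        c, current, moved = min(moves, key=lambda m: m[0])
        tabu = (tabu + [moved])[-tabu_len:]
        if c < best_cost:
            best, best_cost = list(current), c
    return sorted(best), best_cost
```

    On a toy cost that counts how far a subset is from a known-optimal one, the search recovers the optimum in a few moves.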

  18. Visual sensor fusion for active security in robotic industrial environments

    NASA Astrophysics Data System (ADS)

    Robla, Sandra; Llata, Jose R.; Torre-Ferrero, Carlos; Sarabia, Esther G.; Becerra, Victor; Perez-Oria, Juan

    2014-12-01

    This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in detecting proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to determine the 3D localization of objects with respect to a world coordinate system and, at the same time, to obtain their colour information. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, onto 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
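
    The reprojection step described above maps 3D ToF points into the colour image using the cameras' calibration parameters. A minimal pinhole-model sketch, without the lens-distortion and ToF corrections the paper develops, looks like this (the intrinsics in the usage are made up):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates of a calibrated camera.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation.
    Returns Nx2 pixel coordinates (no lens distortion modeled).
    """
    cam = R @ points_3d.T + t.reshape(3, 1)  # world frame -> camera frame
    uvw = K @ cam                            # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T              # perspective divide
```

    With the extrinsics set to identity, a point on the optical axis lands at the principal point, which is a quick sanity check on any calibration pipeline.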

  19. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    PubMed Central

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S.

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue for photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. PMID:22163414
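
    Dempster's rule of combination, the core of the approach above, multiplies the mass functions of independent sources and renormalizes away conflicting mass. A minimal sketch over a two-element frame {cloud, clear} follows; the mass values in the usage are illustrative, not the paper's.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over a common frame of discernment.

    m1, m2: dicts mapping frozensets (focal elements) to masses summing to 1.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # agreeing evidence accumulates on the intersection
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # disjoint focal elements contribute conflict
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

    Two sensors each leaning toward "cloud" reinforce one another, concentrating mass on the singleton and shrinking the uncertain (full-frame) mass.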

  20. Naval target classification by fusion of IR and EO sensors

    NASA Astrophysics Data System (ADS)

    Giompapa, S.; Croci, R.; Di Stefano, R.; Farina, A.; Gini, F.; Graziano, A.; Lapierre, F.

    2007-10-01

    This paper describes the classification of naval targets performed by an infrared (IR) camera and an electro-optical (EO) camera that operate within a more complex multisensor system for the surveillance of a coastal region. The following naval targets are considered: high-speed dinghy, motor boat, fishing boat, and oil tanker. Target classification is performed automatically by exploiting knowledge of the sensor confusion matrix (CM). The CM is analytically computed as a function of the sensor noise features, the sensor resolution, and the size of the image database involved. For both sensors, a database of images is generated from a three-dimensional (3D) Computer-Aided Design (CAD) model of the target, for the four types of ship mentioned above. For the EO camera, image generation is simply obtained by projecting the 3D CAD model onto the camera focal plane. For the IR image simulation, the surface temperatures are first computed using an Open-source Software for Modelling and Simulation of Infrared Signatures (OSMOSIS) that efficiently integrates the dependence of the emissivity upon the surface temperature, the wavelength, and the elevation angle. The software is applicable to realistic ship geometries. Second, these temperatures and the environment features are used to predict realistic IR images. The local decisions on the class are made using the elements of the confusion matrix of each sensor, and they are fused according to a maximum likelihood (ML) rule. The global performance of the classification process is measured in terms of the global confusion matrix of the integrated system. This analytical approach can effectively reduce the computational load of a Monte Carlo simulation when the sensors described here are introduced into a more complex multisensor system for maritime surveillance.
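
    The ML fusion rule described above combines local decisions using each sensor's confusion matrix: assuming conditionally independent sensors, the fused class maximizes the product of P(decision | class) terms. A sketch with made-up two-class matrices:

```python
import numpy as np

def ml_fuse(decisions, confusion_matrices, prior=None):
    """Maximum-likelihood fusion of local class decisions.

    decisions[s] is sensor s's declared class index; confusion_matrices[s][j, i]
    is P(sensor s declares class j | true class i). Returns the fused class.
    """
    n_classes = confusion_matrices[0].shape[1]
    logp = np.zeros(n_classes) if prior is None else np.log(prior)
    for d, cm in zip(decisions, confusion_matrices):
        logp = logp + np.log(cm[d, :] + 1e-12)  # independence across sensors
    return int(np.argmax(logp))
```

    When both sensors declare the same class, the fused decision agrees; when they disagree, the sensor with the more reliable confusion matrix dominates.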

  1. Sensor fusion for assured vision in space applications

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Krishen, Kumar

    1993-01-01

    By using emittance and reflectance radiation models, the effects of angle of observation, polarization, and spectral content are analyzed to characterize the geometrical and physical properties--reflectivity, emissivity, orientation, dielectric properties, and roughness--of a sensed surface. Based on this analysis, the use of microwave, infrared, and optical sensing is investigated to assure the perception of surfaces on a typical lunar outpost. Also, the concept of employing several sensors on a lunar outpost is explored. An approach for efficient hardware implementation of the fused sensor systems is discussed.

  2. All-IP-Ethernet architecture for real-time sensor-fusion processing

    NASA Astrophysics Data System (ADS)

    Hiraki, Kei; Inaba, Mary; Tezuka, Hiroshi; Tomari, Hisanobu; Koizumi, Kenichi; Kondo, Shuya

    2016-03-01

    Serendipter is a device that distinguishes and selects very rare particles and cells from a huge population. We are currently designing and constructing the information processing system for a Serendipter. This system is a kind of sensor-fusion system, but with much more demanding requirements. To fulfill them, we adopt an all-IP-based architecture: the All-IP-Ethernet data processing system consists of (1) sensors/detectors that directly output their data as IP-Ethernet packet streams, (2) aggregation into single Ethernet/TCP/IP streams by an L2 100 Gbps Ethernet switch, and (3) an FPGA board with a 100 Gbps Ethernet interface connected to the switch and a Xeon-based server. Circuits in the FPGA include a 100 Gbps Ethernet MAC, buffers and preprocessing, and real-time deep learning circuits using multi-layer neural networks. The proposed all-IP architecture solves existing problems in constructing large-scale sensor-fusion systems.

  3. Summary of sensor evaluation for the Fusion ELectromagnetic Induction eXperiment (FELIX)

    SciTech Connect

    Knott, M.J.

    1982-08-01

    As part of the First Wall/Blanket/Shield Engineering Test Program, a test bed called FELIX (Fusion ELectromagnetic Induction eXperiment) is now under construction at ANL. Its purpose will be to test, evaluate, and develop computer codes for the prediction of electromagnetically induced phenomena in a magnetic environment modeling that of a fusion reactor. Crucial to this process is the sensing and recording of the various induced effects. Sensor evaluation for FELIX has reached the point where most sensor types have been evaluated and preliminary decisions are being made as to type and quantity for the initial FELIX experiments. These early experiments, the first flat-plate experiment in particular, will be aimed at testing the sensors as well as the pertinent theories involved. The reason for these evaluations, decisions, and proof tests is the harsh electrical and magnetic environment that FELIX presents.

  4. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position estimate. Results of the performance analysis show that position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715
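
    At the heart of the IMM filter is the update of the model probabilities: they are first mixed through the Markov transition matrix, then reweighted by each model filter's measurement likelihood. A sketch of just that step is shown below; the transition matrix and likelihoods are illustrative, and a full IMM also mixes and runs a bank of Kalman filters, which is omitted here.

```python
import numpy as np

def imm_mode_update(mu, Pi, likelihoods):
    """One IMM cycle for the mode probabilities.

    mu: current model probabilities; Pi[i, j] = P(switch from model i to j);
    likelihoods[j]: measurement likelihood under model j's filter.
    """
    predicted = Pi.T @ mu                # mixing / prediction of mode probabilities
    posterior = predicted * likelihoods  # Bayes update with per-model likelihoods
    return posterior / posterior.sum()   # renormalize
```

    A measurement far more likely under the maneuvering model quickly shifts probability mass toward that model, which is what lets the IMM adapt between flight modes.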

  5. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1992-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES, AVHRR, and SSM/I sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS prototyping. This will maximize the work on the fusion algorithms, since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  6. Monitoring soil health with a sensor fusion approach

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Sensor-based approaches to assessment and quantification of soil health are important to facilitate cost-effective, site-specific soil management. While traditional laboratory analysis is effective for assessing soil health (or soil quality) at a few sites, such an approach quickly becomes infeasibl...

  7. Fusion of Smartphone Motion Sensors for Physical Activity Recognition

    PubMed Central

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J. M.

    2014-01-01

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, the accelerometer in particular has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim of improving recognition performance. How and when are the various motion sensors available on a smartphone best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities while carrying smartphones at different body positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead role individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible. PMID:24919015

  8. Statistical methods of combining information: Applications to sensor data fusion

    SciTech Connect

    Burr, T.

    1996-12-31

    This paper reviews some statistical approaches to combining information from multiple sources. Promising new approaches will be described, and potential applications to combining not-so-different data sources such as sensor data will be discussed. Experiences with one real data set are described.

  9. Multi-Sensor Data Fusion Using a Relevance Vector Machine Based on an Ant Colony for Gearbox Fault Detection.

    PubMed

    Liu, Zhiwen; Guo, Wei; Tang, Zhangchun; Chen, Yongqiang

    2015-01-01

    Sensors play an important role in modern manufacturing and industrial processes. Their reliability is vital to ensure reliable and accurate information for condition-based maintenance. For the gearbox, a critical machine component in rotating machinery, the vibration signals collected by sensors are usually noisy. At the same time, fault detection results based on the vibration signals from a single sensor may be unreliable and unstable. To solve this problem, this paper proposes an intelligent multi-sensor data fusion method using the relevance vector machine (RVM) based on an ant colony optimization algorithm (ACO-RVM) for gearboxes' fault detection. RVM is a sparse probability model based on the support vector machine (SVM). RVM not only has higher detection accuracy, but also better real-time accuracy compared with SVM. The ACO algorithm is used to determine the kernel parameters of RVM. Moreover, the ensemble empirical mode decomposition (EEMD) is applied to preprocess the raw vibration signals to eliminate the influence caused by noise and other unrelated signals. The distance evaluation technique (DET) is employed to select dominant features as input of the ACO-RVM, so that redundancy and interference in a large set of features can be removed. Two gearboxes are used to demonstrate the performance of the proposed method. The experimental results show that the ACO-RVM has higher fault detection accuracy than the RVM with the normal cross-validation (CV). PMID:26334280

  10. Multi-Sensor Data Fusion Using a Relevance Vector Machine Based on an Ant Colony for Gearbox Fault Detection

    PubMed Central

    Liu, Zhiwen; Guo, Wei; Tang, Zhangchun; Chen, Yongqiang

    2015-01-01

    Sensors play an important role in modern manufacturing and industrial processes. Their reliability is vital to ensure reliable and accurate information for condition-based maintenance. For the gearbox, a critical machine component in rotating machinery, the vibration signals collected by sensors are usually noisy. At the same time, fault detection results based on the vibration signals from a single sensor may be unreliable and unstable. To solve this problem, this paper proposes an intelligent multi-sensor data fusion method using the relevance vector machine (RVM) based on an ant colony optimization algorithm (ACO-RVM) for gearboxes’ fault detection. RVM is a sparse probability model based on the support vector machine (SVM). RVM not only has higher detection accuracy, but also better real-time accuracy compared with SVM. The ACO algorithm is used to determine the kernel parameters of RVM. Moreover, the ensemble empirical mode decomposition (EEMD) is applied to preprocess the raw vibration signals to eliminate the influence caused by noise and other unrelated signals. The distance evaluation technique (DET) is employed to select dominant features as input of the ACO-RVM, so that redundancy and interference in a large set of features can be removed. Two gearboxes are used to demonstrate the performance of the proposed method. The experimental results show that the ACO-RVM has higher fault detection accuracy than the RVM with the normal cross-validation (CV). PMID:26334280
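
    The distance evaluation technique (DET) mentioned above scores each feature by how well it separates the fault classes. A simplified sketch, ranking features by the ratio of between-class to within-class spread, is shown below; this is a common simplified form of the criterion, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def det_rank(features, labels):
    """Rank features (columns) by a distance-evaluation score.

    Score = spread of class means / average within-class spread; higher
    means more discriminative. Returns column indices, best first.
    """
    classes = np.unique(labels)
    scores = []
    for f in features.T:
        within = np.mean([np.std(f[labels == c]) for c in classes])
        centers = [np.mean(f[labels == c]) for c in classes]
        scores.append(np.std(centers) / (within + 1e-12))
    return np.argsort(scores)[::-1]
```

    Dominant features selected this way keep the fused input compact before it reaches the classifier.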

  11. Report of the Fusion Energy Sciences Advisory Committee. Panel on Integrated Simulation and Optimization of Magnetic Fusion Systems

    SciTech Connect

    Dahlburg, Jill; Corones, James; Batchelor, Donald; Bramley, Randall; Greenwald, Martin; Jardin, Stephen; Krasheninnikov, Sergei; Laub, Alan; Leboeuf, Jean-Noel; Lindl, John; Lokke, William; Rosenbluth, Marshall; Ross, David; Schnack, Dalton

    2002-11-01

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality, which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas, and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross-coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC

  12. Optimization of the silicon sensors for the CMS tracker

    NASA Astrophysics Data System (ADS)

    Albergo, S.; Angarano, M.; Azzi, P.; Babucci, E.; Bacchetta, N.; Bader, A.; Bagliesi, G.; Basti, A.; Biggeri, U.; Biino, C.; Bilei, G. M.; Bisello, D.; Boemi, D.; Bosi, F.; Borello, L.; Braibant, S.; Breuker, H.; Brunetti, M. T.; Bruzzi, M.; Buffini, A.; Busoni, S.; Candelori, A.; Caner, A.; Castaldi, R.; Castro, A.; Catacchini, E.; Checcucci, B.; Ciampolini, P.; Civinini, C.; Costa, M.; Creanza, D.; D'Alessandro, R.; DeMaria, N.; de Palma, M.; Dell'Orso, R.; Dutta, S.; Favro, G.; Fiore, L.; Focardi, E.; French, M.; Freudenreich, K.; Frey, A.; Friedl, M.; Fürtjes, A.; Giassi, A.; Giorgi, M.; Giraldo, A.; Glessing, W.; Gu, W. H.; Hall, G.; Hammarstrom, R.; Hebbeker, T.; Honkanen, A.; Honma, A.; Hrubec, J.; Huhtinen, M.; Kaminsky, A.; Karimaki, V.; Koenig, St.; Krammer, M.; Lariccia, P.; Lenzi, M.; Loreti, M.; Luebelsmeyer, K.; Lustermann, W.; Mättig, P.; Maggi, G.; Mannelli, M.; Mantovani, G.; Marchioro, A.; Mariotti, C.; Martignon, G.; Mc Evoy, B.; Meschini, M.; Messineo, A.; Migliore, E.; My, S.; Neviani, A.; Paccagnella, A.; Palla, F.; Pandoulas, D.; Papi, A.; Parrini, G.; Passeri, D.; Pernicka, M.; Pieri, M.; Piperov, S.; Potenza, R.; Radicci, V.; Raffaelli, F.; Raymond, M.; Rizzo, F.; Santocchia, A.; Segneri, G.; Selvaggi, G.; Servoli, L.; Sguazzoni, G.; Siedling, R.; Silvestris, L.; Starodumov, A.; Stavitski, I.; Surrow, B.; Tempesta, P.; Tonelli, G.; Tricomi, A.; Tuominiemi, J.; Tuuva, T.; Verdini, P. G.; Viertel, G.; Xie, Z.; Yahong, Li; Watts, S.; Wittmer, B.

    2001-07-01

    The CMS experiment at the LHC will comprise a large silicon strip tracker. This article highlights some of the results obtained in the R&D studies for the optimization of its silicon sensors. Measurements of the capacitances and of the high voltage stability of the devices are presented before and after irradiation to the dose expected after the full lifetime of the tracker.

  13. Multi-sensor data fusion for measurement of complex freeform surfaces

    NASA Astrophysics Data System (ADS)

    Ren, M. J.; Liu, M. Y.; Cheung, C. F.; Yin, Y. H.

    2016-01-01

    Along with the rapid development of science and technology in fields such as space optics, multi-scale enriched freeform surfaces are widely used to enhance the performance of optical systems in both functionality and size reduction. Multi-sensor technology is considered one of the most promising methods for measuring and characterizing these surfaces at multiple scales. This paper presents a multi-sensor data fusion based measurement method that purposely extracts the geometric information of the components at different scales, which is then used to establish a holistic geometry of the surface via data fusion. To address the key problems of multi-sensor data fusion, an intrinsic feature pattern based surface registration method is developed to transform the measured datasets into a common coordinate frame. A Gaussian zero-order regression filter is then used to separate each measured dataset into its different scales, and the datasets are fused within the same wavelength band by an edge intensity data fusion algorithm. The fused data at different scales are then merged to form a new surface with holistic multi-scale information. An experimental study is presented to verify the effectiveness of the proposed method.
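
    The scale-separation step described above can be sketched with a zero-order Gaussian regression filter. The following 1-D numpy illustration is a hedged sketch only (the profile, amplitudes, and cutoff wavelength are hypothetical, and real implementations work on 2-D/3-D surface data):

```python
import numpy as np

def gaussian_regression_filter(z, dx, cutoff):
    """Zero-order Gaussian regression filter: the long-wave (form) component
    is a Gaussian-weighted moving average; normalized convolution handles
    the profile boundaries (the 'regression' part of the filter)."""
    alpha = np.sqrt(np.log(2) / np.pi)
    x = np.arange(-3 * cutoff, 3 * cutoff + dx, dx)
    s = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()
    norm = np.convolve(np.ones_like(z), s, mode="same")
    return np.convolve(z, s, mode="same") / norm

# Separate a synthetic profile into long-wave form and short-wave texture.
dx = 0.01
x = np.arange(0.0, 10.0, dx)
profile = 0.5 * np.sin(2 * np.pi * x / 8.0) + 0.05 * np.sin(2 * np.pi * x / 0.2)
form = gaussian_regression_filter(profile, dx, cutoff=1.0)
texture = profile - form   # the short-wave content a finer-scale sensor contributes
```

Subtracting the filtered mean line from the profile leaves the short-wavelength band, which is the separation the fusion algorithm operates on.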

  14. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603
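
    The per-sensor nuisance factors mentioned above can be modeled quite compactly; a minimal numpy sketch for one gyroscope axis (bias random walk plus white noise, with purely illustrative parameter values, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, t_end = 100.0, 10.0                          # sample rate [Hz], duration [s]
t = np.arange(0.0, t_end, 1.0 / fs)
true_rate = 0.5 * np.sin(2 * np.pi * 0.2 * t)    # ground-truth angular rate [rad/s]

# Typical gyroscope nuisance factors (illustrative magnitudes):
bias = np.cumsum(rng.normal(0.0, 1e-4, t.size))  # slowly drifting bias (random walk)
noise = rng.normal(0.0, 0.01, t.size)            # white measurement noise
gyro = true_rate + bias + noise                  # simulated sensor output

# Correlation against ground truth: the kind of figure used to check
# how faithfully a simulated sensor matches real recordings.
r = np.corrcoef(gyro, true_rate)[0, 1]
```

Scaling the bias-instability and noise parameters to match a recorded signal is exactly the validation step the framework performs per sensor.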

  15. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603

  16. Three-band MRI image fusion utilizing the wavelet-based method optimized with two quantitative fusion metrics

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Elmaghraby, Adel S.; Frigui, Hichem

    2006-03-01

    In magnetic resonance imaging (MRI), three bands of images (an "MRI triplet") are available: T1-, T2- and PD-weighted images. The three images of an MRI triplet provide complementary structural information, and it is therefore useful for diagnosis and subsequent analysis to combine the three-band images into one. We propose an advanced discrete wavelet transform (αDWT) for three-band MRI image fusion; the αDWT algorithm is further optimized using two quantitative fusion metrics: the image quality index (IQI) and the ratio spatial frequency error (rSFe). In the αDWT method, principal component analysis (PCA) and morphological processing are incorporated into a regular DWT fusion algorithm. Furthermore, the αDWT has two adjustable parameters, the level of DWT decomposition (Ld) and the length of the selected wavelet (Lw), that directly affect the fusion result. The fused image quality can be quantitatively measured with the established metrics, IQI and rSFe. By varying the control parameters (Ld and Lw), an iterative fusion procedure can be implemented and run until an optimized fusion is achieved. We fused and analyzed several MRI triplets from the Visible Human Project® female dataset. From the quantitative and qualitative evaluations of the fused images, we found that (1) the αDWTi-IQI algorithm produces a smoothed image whereas the αDWTi-rSFe algorithm yields a sharpened image; (2) the fused image "T1+T2" is the most informative in comparison with the other two-in-one fusions (PD+T1 and PD+T2); and (3) for three-in-one fusions, no significant difference is observed among the three fusions (PD+T1)+T2, (PD+T2)+T1 and (T1+T2)+PD, so the order of fusion does not play an important role. The fused images can significantly benefit medical diagnosis as well as further image processing such as multi-modality image fusion (with CT images), visualization (colorization), segmentation, classification and computer-aided diagnosis (CAD).
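
    The IQI used as one of the two optimization metrics is commonly computed as the Wang-Bovik universal image quality index; below is a minimal numpy sketch of the global (whole-image) version, whereas practical implementations often evaluate it over a sliding window:

```python
import numpy as np

def image_quality_index(x, y):
    """Universal image quality index (Wang & Bovik), global version:
    the product of correlation, luminance and contrast terms; 1.0 means
    the two images are identical."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(1)
ref = rng.uniform(50, 200, (64, 64))          # stand-in reference image
noisy = ref + rng.normal(0, 10, ref.shape)    # degraded candidate image
q = image_quality_index(ref, noisy)           # drops below 1.0 as quality degrades
```

An iterative search over (Ld, Lw), as described above, would re-fuse the images at each parameter setting and keep the setting with the best metric value.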

  17. Multi-sensor fusion, communications and information warfare

    NASA Astrophysics Data System (ADS)

    Schutzer, D.

    1983-12-01

    Today's fusion problems are chiefly concerned with organizational and procedural issues; the technology they employ is mostly available state-of-the-art. The future brings a new set of concerns centered on issues that are more technical in nature. Future military command and control and weapons systems will likely be more distributed, more automated and smarter. They will probably include an advanced form of information warfare in which sensing, information exchange, jamming, deception, and misinformation can be managed and orchestrated from a total mission objective perspective. As a result, the future fusion process will be required to handle and process an orders-of-magnitude increase in the volume and diversity of input data, faster. It will need to produce a greater variety of information to feed automated C2 and weapons systems databases through more interactive and responsive interfaces than exist today. At the same time it needs to analyze this data at a deeper level of understanding than ever before, scrutinizing and drawing inferences and conclusions about adversaries' underlying beliefs, readiness, intentions and future actions from what is often a suspect and spotty database. Finally, these conclusions and inferences need to be presented in a clear, concise, honest, yet convincing and timely manner. This paper presents a unified framework from which the necessary information may be fused, managed and presented to support command in such a future information warfare environment, and discusses the associated technical challenges. It also reviews various ongoing research programs that are addressing these challenges.

  18. Inertial and optical sensor fusion to compensate for partial occlusions in surgical tracking systems

    NASA Astrophysics Data System (ADS)

    He, Changyu; Liu, Yue

    2015-08-01

    To solve the occlusion problem in optical tracking systems (OTS) for surgical navigation, this paper proposes a sensor fusion approach and an adaptive display method to handle cases where partial or total occlusion occurs. In the sensor fusion approach, the full 6D pose information provided by the optical tracker is used to estimate the bias of the inertial sensors when all of the markers are visible. When partial occlusion occurs, the optical system can still track the position of at least one marker, which can be combined with the orientation estimated from the inertial measurements to recover the full 6D pose. When all the markers are invisible, position tracking is realized based on the outputs of the inertial measurement unit (IMU), which may accumulate drift error. To alert the user when the drift error is large enough to influence the navigation, images adapted to the drift error are displayed in the user's field of view. The experiments are performed with an augmented reality HMD, which displays the AR images, and a hybrid tracking system (HTS) consisting of an OTS and an IMU. Experimental results show that with the proposed sensor fusion approach the 6D pose of the head with respect to the reference frame can be estimated even under partial occlusion. With the help of the proposed adaptive display method, users can recover the scene of markers when the error is considered relatively high.
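
    The fallback logic above (optical when visible, inertial dead reckoning under full occlusion) can be sketched in one dimension. This is a simplified illustration, not the paper's estimator; resetting the velocity on each optical fix is an extra simplification, and all numbers are hypothetical:

```python
import numpy as np

def fuse_position(optical_pos, visible, accel, dt):
    """Trust the optical tracker while markers are visible; dead-reckon
    from (noisy) inertial data during full occlusion, which drifts."""
    pos, vel = optical_pos[0], 0.0
    est = []
    for k in range(len(visible)):
        if visible[k]:
            pos, vel = optical_pos[k], 0.0   # re-anchor on the optical fix
        else:
            vel += accel[k] * dt             # IMU double integration drifts
            pos += vel * dt
        est.append(pos)
    return np.array(est)

dt, n = 0.01, 500
truth = np.linspace(0.0, 1.0, n)              # slow linear motion [m]
visible = np.ones(n, dtype=bool)
visible[200:300] = False                      # a 1 s total occlusion
accel = np.random.default_rng(2).normal(0, 0.2, n)  # noisy accelerometer
est = fuse_position(truth, visible, accel, dt)
drift = np.abs(est - truth)                   # grows only during the occlusion
```

The drift signal is exactly the quantity the adaptive display method would map to a visual warning.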

  19. Beyond third generation: a sensor-fusion targeting FLIR pod for the F/A-18

    NASA Astrophysics Data System (ADS)

    Krebs, William K.; Scribner, Dean A.; Miller, Geoffrey M.; Ogawa, James S.; Schuler, Jonathon M.

    1998-03-01

    Navy and Marine Corps F/A-18 pilots state that the targeting FLIR system does not provide enough target definition and clarity. As a result, high-altitude tactics missions are the most difficult because of the limited time available to identify the target. If the targeting FLIR system had a better stand-off range and improved target contrast, the pilots' task would be easier. Unfortunately, the replacement cost of the existing FLIR equipment is prohibitive. The purpose of this study is to modify the existing F/A-18 targeting FLIR system with a dual-band color sensor to improve target contrast and stand-off range. Methods: A non-real-time color sensor fusion system was flown on a NASA F/A-18 in a NITE Hawk targeting FLIR pod. Flight videotape was recorded from a third-generation image-intensified CCD and a first-generation long-wave infrared sensor. A standard visual search task was used to assess whether pilots' situational awareness was improved by combining the two sensor videotape sequences into a single fused color or grayscale representation. Results: Fleet aviators showed that color fusion improved target detection but hindered situational awareness. Aviators reported that the lack of color constancy made the scene unaesthetically pleasing; however, target detection was enhanced. Conclusion: A color fusion scene may benefit targeting applications but hinder situational awareness.

  20. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the data required for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of the real potential of those global remote sensing data, characterized by different accuracies, for global inundation monitoring, and of how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location is a major issue, since the aim is to provide as much information as possible with as little redundancy as possible. In order to update their layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors were then selected. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor is compared by evaluating its marginal entropy. Sensors with a low marginal entropy are excluded from the process because of their low capability to provide information. The number of sensors is then optimized by considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
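
    The marginal- and joint-entropy computations described above can be sketched on synthetic records; a toy numpy illustration (bin count and data are hypothetical stand-ins for water-stage series), in which a pair of near-duplicate sensors carries less joint information than a pair including the independent one:

```python
import numpy as np
from itertools import combinations

def marginal_entropy(series, bins=8):
    """Shannon entropy of one sensor's discretized record [bits]."""
    p = np.histogram(series, bins=bins)[0] / series.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def joint_entropy(data, idx, bins=8):
    """Joint entropy of a subset of sensors via joint discretization."""
    codes = np.stack([np.digitize(data[i], np.histogram(data[i], bins)[1][1:-1])
                      for i in idx])
    _, counts = np.unique(codes, axis=1, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)
n = 2000
base = rng.normal(0, 1, n)                      # shared flood signal
data = np.stack([base + rng.normal(0, 0.1, n),  # sensor 0
                 base + rng.normal(0, 0.1, n),  # sensor 1: redundant with 0
                 rng.normal(0, 1, n)])          # sensor 2: independent
# Pick the 2-sensor subset with the highest joint entropy (least redundancy).
best = max(combinations(range(3), 2), key=lambda s: joint_entropy(data, s))
```

In the full MOOP this joint-entropy objective is traded off against a redundancy measure rather than maximized alone.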

  1. Optimal Sensor Layouts in Underwater Locomotory Systems

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Kanso, Eva

    2015-11-01

    Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.
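
    A minimal instance of the local-to-global inverse problem described above is recovering a linear shear profile u(y) = U + G·y from a few pointwise velocity measurements by least squares. This is a hedged sketch under strong assumptions (linear ambient flow, known sensor offsets, illustrative noise level), not the authors' formulation:

```python
import numpy as np

# Body-fixed sensors at offsets y sample an ambient linear shear u(y) = U + G*y.
y_sensors = np.array([-0.5, 0.0, 0.5, 1.0])   # sensor offsets along the body
U_true, G_true = 1.2, 0.8                      # freestream speed, shear rate
rng = np.random.default_rng(5)
u_meas = U_true + G_true * y_sensors + rng.normal(0, 0.01, y_sensors.size)

# Least-squares recovery of the global flow parameters (U, G).
A = np.stack([np.ones_like(y_sensors), y_sensors], axis=1)
(U_est, G_est), *_ = np.linalg.lstsq(A, u_meas, rcond=None)
```

Sensor-layout optimization then amounts to choosing the offsets that best condition this estimation problem.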

  2. A sensor fusion field experiment in forest ecosystem dynamics

    NASA Technical Reports Server (NTRS)

    Smith, James A.; Ranson, K. Jon; Williams, Darrel L.; Levine, Elissa R.; Goltz, Stewart M.

    1990-01-01

    The background of the Forest Ecosystem Dynamics field campaign is presented, a progress report on the analysis of the collected data and related modeling activities is provided, and plans for future experiments at different points in the phenological cycle are outlined. The ecological overview of the study site is presented, and attention is focused on forest stands, needles, and atmospheric measurements. Sensor deployment and thermal and microwave observations are discussed, along with two examples of the optical radiation measurements obtained during the experiment in support of radiative transfer modeling. Future activities pertaining to an archival system, synthetic aperture radar, carbon acquisition modeling, and upcoming field experiments are considered.

  3. Geometry optimization for micro-pressure sensor considering dynamic interference.

    PubMed

    Yu, Zhongliang; Zhao, Yulong; Li, Lili; Tian, Bian; Li, Cun

    2014-09-01

    This paper presents the geometry optimization of a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index describing the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into a typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fits based on ANSYS simulation results are performed to establish the equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries. Convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is used to fabricate prototypes of the sensors. The outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF of 210.947 Hz^(1/4). The favorable overall performance makes the sensor more suitable for altimetry. PMID:25273764

  4. Optimizing sensor packaging costs and performances in environmental applications

    NASA Astrophysics Data System (ADS)

    Gandelli, Alessandro; Grimaccia, Francesco; Zich, Riccardo E.

    2005-02-01

    Sensor packaging has been identified as one of the most significant areas of research for enabling sensor usage in harsh environments across several application fields. Protection is one of the primary goals of sensor packaging; however, research deals not only with the optimization of robust and resistant packages, but also with electromagnetic performance. From the economic point of view, wireless sensor networks comprise hundreds of thousands of small sensors, namely motes, whose cost should be reduced to the lowest possible level, which drives the packaging cost down as well. So far, packaging issues have not been extended to such topics because these products are not yet in the advanced production cycle. However, in order to guarantee high EMC performance and low packaging costs, it is necessary to address the packaging strategy from the very beginning. Technological improvements that impact production time and costs can be suitably organized by anticipating the above-mentioned issues in the development and design of the motes, thereby significantly reducing the final optimization effort. The paper addresses the development and production techniques necessary to identify the real needs in this field and provides suitable strategies to enhance the industrial performance of high-volume production. Moreover, the electrical and mechanical characteristics of these devices are reviewed and identified as a function of the environmental requirements and electromagnetic compatibility. Future developments complete the scenario and introduce the next mote generation, characterized by a cost lower by an order of magnitude.

  5. Geometry optimization for micro-pressure sensor considering dynamic interference

    NASA Astrophysics Data System (ADS)

    Yu, Zhongliang; Zhao, Yulong; Li, Lili; Tian, Bian; Li, Cun

    2014-09-01

    This paper presents the geometry optimization of a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index describing the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into a typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fits based on ANSYS simulation results are performed to establish the equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries. Convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is used to fabricate prototypes of the sensors. The outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF of 210.947 Hz^(1/4). The favorable overall performance makes the sensor more suitable for altimetry.

  6. Geometry optimization for micro-pressure sensor considering dynamic interference

    SciTech Connect

    Yu, Zhongliang; Zhao, Yulong Li, Lili; Tian, Bian; Li, Cun

    2014-09-15

    This paper presents the geometry optimization of a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index describing the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into a typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fits based on ANSYS simulation results are performed to establish the equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries. Convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is used to fabricate prototypes of the sensors. The outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF of 210.947 Hz^(1/4). The favorable overall performance makes the sensor more suitable for altimetry.

  7. Robust optimization of contaminant sensor placement for community water systems.

    SciTech Connect

    Konjevod, Goran; Carr, Robert D.; Greenberg, Harvey J.; Hart, William Eugene; Morrison, Tod; Phillips, Cynthia Ann; Lin, Henry; Lauer, Erik

    2004-09-01

    We present a series of related robust optimization models for placing sensors in municipal water networks to detect contaminants that are maliciously or accidentally injected. We formulate sensor placement problems as mixed-integer programs for which the objective coefficients are not known with certainty. We consider a restricted absolute robustness criterion that is motivated by natural restrictions on the uncertain data, and we define three robust optimization models that differ in how the coefficients in the objective vary. Under one set of assumptions there exists a sensor placement that is optimal for all admissible realizations of the coefficients. Under other assumptions, we can apply sorting to solve each worst-case realization efficiently, or we can apply duality to integrate the worst-case outcome into a single integer program. The most difficult case is where the objective parameters are bilinear; we prove its complexity is NP-hard even under simplifying assumptions. We consider a relaxation that provides an approximation, giving an overall guarantee of near-optimality when used with branch-and-bound search. We present preliminary computational experiments that illustrate the computational complexity of solving these robust formulations on sensor placement applications.
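
    The absolute-robustness (min-max) objective can be illustrated by brute force on a toy instance. This is a sketch only: the impact matrix below is a random placeholder for hydraulic simulation output, and realistic network instances require the mixed-integer programming machinery discussed in the paper:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n_nodes, n_events, k = 8, 5, 2
# detect[s, e]: impact accrued before a sensor at node s detects event e
# (illustrative random data standing in for contaminant-transport simulation).
detect = rng.uniform(1, 100, (n_nodes, n_events))

def worst_case_impact(placement):
    """Absolute-robustness objective: the maximum over all event scenarios
    of the impact at the first (lowest-impact) detecting sensor."""
    return detect[list(placement)].min(axis=0).max()

# Enumerate all k-sensor placements and keep the min-max optimum.
best = min(combinations(range(n_nodes), k), key=worst_case_impact)
```

Enumeration is exponential in k, which is precisely why the paper resorts to integer programming and branch-and-bound for city-scale networks.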

  8. Optimization of fingernail sensor design based on fingernail imaging

    NASA Astrophysics Data System (ADS)

    Abu-Khalaf, Jumana M.; Mascaro, Stephen A.

    2010-08-01

    This paper describes the optimization of fingernail sensors for measuring fingertip touch forces for human-computer interaction. The fingernail sensor uses optical reflectance photoplethysmography to measure the change in blood perfusion in the fingernail bed when the fingerpad touches a surface with various forces. In the original fingernail sensor, color changes observed through the fingernail were measured by mounting an array of six LEDs (light-emitting diodes) and eight photodetectors on the fingernail in a laterally symmetric configuration. The optical components were located such that each photodiode had at least one neighboring LED. The role of each photodetector was investigated in terms of the effect of removing one or more photodetectors on the force prediction estimate. The analysis suggested designing the next generation of fingernail sensors with fewer than eight photodetectors. This paper proposes an optimal redesign by analyzing a photographic catalog composed of six different force poses, representing the average fingernail coloration patterns of fifteen human subjects. It also introduces an optical model that describes light transmission between an LED and a photodiode, and predicts the optimal locations of the optoelectronic devices in the fingernail area.

  9. Fast obstacle detection based on multi-sensor information fusion

    NASA Astrophysics Data System (ADS)

    Lu, Linli; Ying, Jie

    2014-11-01

    Obstacle detection is one of the key problems in areas such as driving assistance and mobile robot navigation, and it cannot be solved adequately with a single sensor. A method is proposed to obtain, in real time, information about the obstacle in front of the robot and to calculate the real size of the obstacle area according to the triangle-similarity mechanism of the imaging process, by fusing data from a camera and an ultrasonic sensor; this supports the local path planning decision. In the image analysis part, the obstacle detection region is limited according to a complementary principle: the ultrasonic detection range is chosen as the region for obstacle detection when the obstacle is relatively near the robot, while the travelling road area in front of the robot is the region for relatively-long-distance detection. The obstacle detection algorithm is adapted from a powerful background subtraction algorithm, ViBe (Visual Background Extractor). We extracted an obstacle-free region in front of the robot in the initial frame; this region provides a reference sample set of gray-scale values for obstacle detection. Experiments detecting different obstacles at different distances give the accuracy of the obstacle detection and the percentage error between the calculated and actual size of the detected obstacle. Experimental results show that the detection scheme can effectively detect obstacles in front of the robot and provide the size of the obstacle with relatively high dimensional accuracy.
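
    The triangle-similarity mechanism reduces to the pinhole relation: real extent = image extent × range / focal length (with the focal length in pixels). A minimal sketch with hypothetical numbers:

```python
def real_obstacle_size(pixel_extent, range_m, focal_length_px):
    """Triangle similarity / pinhole model: convert an obstacle's image
    extent [px] to a physical extent [m] using the ultrasonic range [m]."""
    return pixel_extent * range_m / focal_length_px

# A 120 px wide obstacle blob, 1.5 m away, camera focal length 600 px:
width_m = real_obstacle_size(120, 1.5, 600)   # -> 0.30 m
```

The camera supplies the pixel extent of the detected blob, the ultrasonic sensor the range; fusing the two yields the physical size used in path planning.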

  10. Unique sensor fusion system for coordinate-measuring machine tasks

    NASA Astrophysics Data System (ADS)

    Nashman, Marilyn; Yoshimi, Billibon; Hong, Tsai Hong; Rippey, William G.; Herman, Martin

    1997-09-01

    This paper describes a real-time hierarchical system that fuses data from vision and touch sensors to improve the performance of a coordinate measuring machine (CMM) used for dimensional inspection tasks. The system consists of sensory processing, world modeling, and task decomposition modules. It uses the strengths of each sensor -- the precision of the CMM scales and the analog touch probe and the global information provided by the low resolution camera -- to improve the speed and flexibility of the inspection task. In the experiment described, the vision module performs all computations in image coordinate space. The part's boundaries are extracted during an initialization process and then the probe's position is continuously updated as it scans and measures the part surface. The system fuses the estimated probe velocity and distance to the part boundary in image coordinates with the estimated velocity and probe position provided by the CMM controller. The fused information provides feedback to the monitor controller as it guides the touch probe to scan the part. We also discuss integrating information from the vision system and the probe to autonomously collect data for 2-D to 3-D calibration, and work to register computer aided design (CAD) models with images of parts in the workplace.
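
    Fusing a coarse vision-based estimate with a precise scale readout, as the system above does with the camera and the CMM, is classically done with inverse-variance weighting, the minimum-variance linear combination of independent estimates. A minimal sketch with illustrative numbers, not the system's actual filter:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Minimum-variance (inverse-variance weighted) fusion of two
    independent scalar estimates of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # precision-weighted mean
    var = 1.0 / (w1 + w2)                  # fused variance is always smaller
    return x, var

# Coarse camera estimate (high variance) refined by the precise CMM scales.
x, var = fuse_estimates(10.4, 4.0, 10.05, 0.01)
```

The fused estimate sits almost on the CMM reading, and its variance is below that of either input, which is why combining the sensors speeds up probe guidance without sacrificing precision.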