Science.gov

Sample records for optimal sensor fusion

  1. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as Monte Carlo analysis capability, is included to enable statistical performance evaluations.

  2. An Optimal Pulse System Design by Multichannel Sensors Fusion.

    PubMed

    Wang, Dimin; Zhang, David; Lu, Guangming

    2016-03-01

    Pulse diagnosis, recognized as an important branch of traditional Chinese medicine (TCM), has a long history in health diagnosis. Certain features in the pulse are known to be related to physiological status and have been identified as biomarkers. In recent years, electronic equipment has been designed to obtain the valuable information inside the pulse. A single-point pulse acquisition platform has the benefit of low cost and flexibility, but is time consuming in operation and not standardized in pulse location. A pulse system with a single type of sensor is easy to implement, but is limited in extracting sufficient pulse information. This paper proposes a novel system with an optimal design that is specialized for pulse diagnosis. We combine a pressure sensor with a photoelectric sensor array to make a multichannel sensor fusion structure. Then, the optimal pulse signal processing methods and sensor fusion strategy are introduced for feature extraction. Finally, the developed optimal pulse system and methods are tested on a pulse database acquired from healthy subjects and patients known to be afflicted with diabetes. The experimental results indicate that the classification accuracy is increased significantly under the optimal design and demonstrate that the developed pulse system with multichannel sensor fusion is more effective than previous pulse acquisition platforms.

  3. Optimal sensor fusion for land vehicle navigation

    SciTech Connect

    Morrow, J.D.

    1990-10-01

    Position location is a fundamental requirement in autonomous mobile robots which record and subsequently follow x,y paths. The Dept. of Energy, Office of Safeguards and Security, Robotic Security Vehicle (RSV) program involves the development of an autonomous mobile robot for patrolling a structured exterior environment. A straightforward method for autonomous path-following has been adopted and requires "digitizing" the desired road network by storing x,y coordinates every 2 m along the roads. The position location system used to define the locations consists of a radio beacon system, which triangulates position off two known transponders, and dead reckoning with compass and odometer. This paper addresses the problem of combining these two measurements to arrive at a best estimate of position. Two algorithms are proposed: the "optimal" algorithm treats the measurements as random variables and minimizes the estimate variance, while the "average error" algorithm considers the bias in dead reckoning and attempts to guarantee an average error. Data collected on the algorithms indicate that both work well in practice. 2 refs., 7 figs.
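
    The "optimal" algorithm described here is, in essence, inverse-variance weighting of the two position estimates. A minimal sketch of that weighting (illustrative variance values, not the RSV program's actual implementation):

```python
import numpy as np

def fuse_min_variance(x_beacon, var_beacon, x_dr, var_dr):
    """Minimum-variance fusion of two independent position estimates.

    Each argument is a 2-vector (x, y) estimate with a per-axis variance.
    The weights minimize the variance of the fused estimate.
    """
    x_beacon, x_dr = np.asarray(x_beacon, float), np.asarray(x_dr, float)
    var_beacon, var_dr = np.asarray(var_beacon, float), np.asarray(var_dr, float)
    w = var_dr / (var_beacon + var_dr)            # weight on the beacon estimate
    x_fused = w * x_beacon + (1.0 - w) * x_dr
    var_fused = (var_beacon * var_dr) / (var_beacon + var_dr)
    return x_fused, var_fused

# Hypothetical numbers: the radio-beacon fix is noisier across-track,
# dead reckoning drifts along-track.
pos, var = fuse_min_variance([102.0, 48.5], [4.0, 1.0],
                             [100.5, 49.2], [1.0, 4.0])
print(pos, var)
```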

  4. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    PubMed Central

    Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz

    2017-01-01

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857

  5. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach.

    PubMed

    Girrbach, Fabian; Hol, Jeroen D; Bellusci, Giovanni; Diehl, Moritz

    2017-05-19

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem.
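
    Moving horizon estimation replaces the recursive filter update with a nonlinear least-squares problem over a sliding window of past states, with process residuals driven by the IMU and measurement residuals from the GNSS fixes. A heavily simplified one-dimensional sketch of that structure (synthetic data and made-up weights, not the authors' framework or flight dataset):

```python
import numpy as np
from scipy.optimize import least_squares

dt, N = 0.1, 20        # sample period [s], horizon length (states in the window)

def residuals(z, acc_meas, gnss_pos, w_proc, w_meas):
    """Stacked residuals over one horizon window (1-D position/velocity)."""
    x = z.reshape(N, 2)                             # columns: position, velocity
    res = []
    for k in range(N - 1):
        # Process residuals: propagate state k with the IMU acceleration sample.
        p_pred = x[k, 0] + x[k, 1] * dt + 0.5 * acc_meas[k] * dt ** 2
        v_pred = x[k, 1] + acc_meas[k] * dt
        res += [w_proc * (x[k + 1, 0] - p_pred), w_proc * (x[k + 1, 1] - v_pred)]
    for k in range(N):
        # Measurement residuals: GNSS position fixes (NaN = no fix at this step).
        if not np.isnan(gnss_pos[k]):
            res.append(w_meas * (x[k, 0] - gnss_pos[k]))
    return np.array(res)

# Synthetic data for illustration: constant acceleration, noisy IMU and GNSS.
rng = np.random.default_rng(0)
t = np.arange(N) * dt
true_pos = 0.5 * 0.5 * t ** 2                          # a = 0.5 m/s^2
acc_meas = 0.5 + 0.05 * rng.standard_normal(N)         # "IMU" accelerations
gnss_pos = true_pos + 0.5 * rng.standard_normal(N)     # "GNSS" positions
gnss_pos[::2] = np.nan                                 # GNSS slower than the IMU

sol = least_squares(residuals, np.zeros(2 * N),
                    args=(acc_meas, gnss_pos, 10.0, 1.0))
estimate = sol.x.reshape(N, 2)
print("estimated final position:", estimate[-1, 0], "true:", true_pos[-1])
```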

  6. Analytical performance evaluation for autonomous sensor fusion

    NASA Astrophysics Data System (ADS)

    Chang, K. C.

    2008-04-01

    A distributed data fusion system consists of a network of sensors, each capable of local processing and fusion of sensor data. There has been a great deal of work in developing distributed fusion algorithms applicable to a network-centric architecture. Currently there are at least a few approaches, including naive fusion, cross-correlation fusion, information graph fusion, maximum a posteriori (MAP) fusion, channel filter fusion, and covariance intersection fusion. However, in general, in a distributed system such as an ad hoc sensor network, the communication architecture is not fixed. Each node has knowledge of only its local connectivity but not the global network topology. In those cases, a distributed fusion algorithm based on the information graph type of approach may not scale, due to its requirement to carry long pedigree information for decorrelation. In this paper, we focus on scalable fusion algorithms and conduct analytical performance evaluation to compare their performance. The goal is to understand the performance of those algorithms under different operating conditions. Specifically, we evaluate the performance of the channel filter, Chernoff, Shannon, and Bhattacharyya fusion algorithms. We also compare their results to naive fusion and "optimal" centralized fusion algorithms under a specific communication pattern.
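
    Among the fusion rules listed, covariance intersection has a particularly compact form: it fuses two estimates whose cross-correlation is unknown by convexly combining their information matrices. A minimal sketch (illustrative numbers only, not from the paper's evaluation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Covariance intersection: consistent fusion of two estimates whose
    cross-correlation is unknown (e.g., tracks that share pedigree)."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)

    # Choose the mixing weight omega that minimizes the fused covariance trace.
    def fused_trace(w):
        return np.trace(np.linalg.inv(w * P1i + (1 - w) * P2i))

    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P, w

# Two track estimates of the same target (illustrative numbers).
x1, P1 = np.array([10.0, 1.0]), np.diag([2.0, 0.5])
x2, P2 = np.array([10.5, 0.8]), np.diag([0.5, 2.0])
print(covariance_intersection(x1, P1, x2, P2))
```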

  7. Multi-sensor optimal H∞ fusion filters for delayed nonlinear intelligent systems based on a unified model.

    PubMed

    Liu, Meiqin; Zhang, Senlin; Jin, Yaochu

    2011-04-01

    This paper is concerned with multi-sensor optimal H(∞) fusion filtering for a class of nonlinear intelligent systems with time delays. A unified model consisting of a linear dynamic system and a bounded static nonlinear operator is employed to describe these systems, such as neural networks and Takagi and Sugeno (T-S) fuzzy models. Based on the H(∞) performance analysis of this unified model using the linear matrix inequality (LMI) approach, centralized and distributed fusion filters are designed for multi-sensor time-delayed systems to guarantee the asymptotic stability of the fusion error systems and to reduce the influence of noise on the filtering error. The parameters of these filters are obtained by solving the eigenvalue problem (EVP). As most artificial neural networks or fuzzy systems with or without time delays can be described with this unified model, fusion filter design for these systems can be done in a unified way. Simulation examples are provided to illustrate the design procedure and effectiveness of the proposed approach. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Optimization of Automatic Target Recognition with a Reject Option Using Fusion and Correlated Sensor Data

    DTIC Science & Technology

    2005-04-25

    organizational, informational, perceptual; as appropriate to a given system's mission. As the class of relationships estimated and the numbers of... recent transition from military applications to a global encompassment of industrial, medical and other fields, the relationship of sensor fusion... [The remainder of this excerpt is residue from a neural-network diagram: input nodes, context nodes, and weight labels.]

  9. Soldier systems sensor fusion

    NASA Astrophysics Data System (ADS)

    Brubaker, Kathryne M.

    1998-08-01

    This paper addresses sensor fusion and its applications in emerging Soldier Systems integration and the unique challenges associated with the human platform. Technology that provides the highest operational payoff in a lightweight warrior system must not only have enhanced capabilities, but have low-power components resulting in order-of-magnitude power reductions coupled with significant cost reductions. These reductions in power and cost will be achieved through partnership with industry and leveraging of commercial state-of-the-art advancements in microelectronics and power sources. A new generation of full-solution fire control systems (to include temperature, wind and range sensors) and target acquisition systems will accompany a new generation of individual combat weapons and upgrade existing weapon systems. Advanced lightweight thermal, IR, laser and video sensors will be used for surveillance, target acquisition, imaging and combat identification applications. Multifunctional sensors will provide embedded training features in combat configurations, allowing the soldier to 'train as he fights' without the traditional cost and weight penalties associated with separate systems. Personal status monitors (detecting pulse, respiration rate, muscle fatigue, core temperature, etc.) will provide commanders and the highest echelons with instantaneous medical data. Seamless integration of GPS and dead reckoning (compass and pedometer) and/or inertial sensors will aid navigation and increase position accuracy. Improved sensors and processing capability will provide earlier detection of battlefield hazards such as mines, enemy lasers and NBC (nuclear, biological, chemical) agents. Via the digitized network, the situational awareness database will automatically be updated with weapon, medical, position and battlefield hazard data. Soldier Systems Sensor Fusion will ultimately establish each individual soldier as an individual sensor on the battlefield.

  10. Improved GSO optimized ESN soft-sensor model of flotation process based on multisource heterogeneous information fusion.

    PubMed

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy to meet the online soft-sensor requirements of real-time control in the flotation process.

  11. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, and to extract the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy to meet the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935

  12. Sensor fusion for airborne landmine detection

    NASA Astrophysics Data System (ADS)

    Schatten, Miranda A.; Gader, Paul D.; Bolton, Jeremy; Zare, Alina; Mendez-Vasquez, Andres

    2006-05-01

    Sensor fusion has become a vital research area for mine detection because of the countermine community's conclusion that no single sensor is capable of detecting mines at the necessary detection and false alarm rates over a wide variety of operating conditions. The U. S. Army Night Vision and Electronic Sensors Directorate (NVESD) evaluates sensors and algorithms for use in a multi-sensor multi-platform airborne detection modality. A large dataset of hyperspectral and radar imagery exists from the four major data collections performed at U. S. Army temperate and arid testing facilities in Autumn 2002, Spring 2003, Summer 2004, and Summer 2005. There are a number of algorithm developers working on single-sensor algorithms in order to optimize feature and classifier selection for that sensor type. However, a given sensor/algorithm system has an absolute limitation based on the physical phenomena that system is capable of sensing. Therefore, we perform decision-level fusion of the outputs from single-channel algorithms and we choose to combine systems whose information is complementary across operating conditions. That way, the final fused system will be robust to a variety of conditions, which is a critical property of a countermine detection system. In this paper, we present the analysis of fusion algorithms on data from a sensor suite consisting of high frequency radar imagery combined with hyperspectral long-wave infrared sensor imagery. The main type of fusion being considered is Choquet integral fusion. We evaluate performance achieved using the Choquet integral method for sensor fusion versus Boolean and soft "and," "or," mean, or majority voting.
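
    The discrete Choquet integral combines per-sensor confidence scores with respect to a fuzzy measure defined on subsets of sensors, so the worth of a coalition of sensors need not be additive. A small sketch (the two-sensor fuzzy measure below is hypothetical, not the values used in the NVESD study):

```python
import numpy as np

def choquet_integral(scores, fuzzy_measure):
    """Discrete Choquet integral of per-sensor confidence scores in [0, 1].

    `fuzzy_measure` maps every subset of sensor indices (as a frozenset)
    to its measure g(A), with g(empty set) = 0 and g(all sensors) = 1.
    """
    order = np.argsort(scores)                    # ascending sort of the scores
    sorted_scores = np.asarray(scores)[order]
    total, prev = 0.0, 0.0
    for i in range(len(scores)):
        remaining = frozenset(int(j) for j in order[i:])   # sensors scoring >= current
        total += (sorted_scores[i] - prev) * fuzzy_measure[remaining]
        prev = sorted_scores[i]
    return total

# Hypothetical 2-sensor measure: radar alone 0.3, hyperspectral alone 0.5,
# both together 1.0 (illustrative values only).
g = {frozenset(): 0.0,
     frozenset({0}): 0.3,
     frozenset({1}): 0.5,
     frozenset({0, 1}): 1.0}
print(choquet_integral([0.8, 0.4], g))
```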

  13. Multi-sensor fusion development

    NASA Astrophysics Data System (ADS)

    Bish, Sheldon; Rohrer, Matthew; Scheffel, Peter; Bennett, Kelly

    2016-05-01

    The U.S. Army Research Laboratory (ARL) and McQ Inc. are developing a generic sensor fusion architecture that involves several diverse processes working in combination to create a dynamic task-oriented, real-time informational capability. Processes include sensor data collection, persistent and observational data storage, and multimodal and multisensor fusion that includes the flexibility to modify the fusion program rules for each mission. Such a fusion engine lends itself to a diverse set of sensing applications and architectures while using open-source software technologies. In this paper, we describe a fusion engine architecture that combines multimodal and multi-sensor fusion within an Open Standard for Unattended Sensors (OSUS) framework. The modular, plug-and-play architecture of OSUS allows future fusion plugin methodologies to have seamless integration into the fusion architecture at the conceptual and implementation level. Although beyond the scope of this paper, this architecture allows for data and information manipulation and filtering for an array of applications.

  14. Optimal Sensor Fusion for Structural Health Monitoring of Aircraft Composite Components

    DTIC Science & Technology

    2011-09-01

    Advanced Materials, Vol. 3, No. 2, 2002. 16. COCONUT Project, 2001. Algorithms for Solving Nonlinear Constrained Optimization Problems: The State of the Art. http://www.mat.univie.ac.at/~neum/glopt/coconut/2001. 17. Pinter, J. (Ed.). Global Optimization: Scientific and Engineering Case Studies (Nonconvex Optimization and Its Applications). Springer, 2006.

  15. Imaging sensor fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Currie, Nicholas C.; Demma, Fred J.; Ferris, David D., Jr.; McMillan, Robert W.; Wicks, Michael C.; Zyga, Kathleen

    1997-02-01

    Sensors are needed for concealed weapon detection which perform better with regard to weapon classification, identification, probability of detection and false alarm rate than the magnetic sensors commonly used in airports. We have concluded that no single sensor will meet the requirements for a reliable concealed weapon detector and thus that sensor fusion is required to optimize detection probability and false alarm rate by combining sensor outputs in a synergistic fashion. This paper describes microwave, millimeter wave, far infrared, infrared, x-ray, acoustic, and magnetic sensors which have some promise in the field of concealed weapon detection. The strengths and weaknesses of these devices are discussed, and examples of the outputs of most of them are given. Various approaches to fusion of these sensors are also described, from simple cuing of one sensor by another to improvement of image quality by using multiple systems. It is further concluded that none of the sensors described herein will ever replace entirely the airport metal detector, but that many of them meet needs imposed by applications requiring a higher detection probability and lower false alarm rate.

  16. Sensor fusion with application to electronic warfare

    NASA Astrophysics Data System (ADS)

    Zanzalari, Robert M.; Van Alstine, Edward

    1999-03-01

    The Night Vision and Electronic Sensors Directorate, Survivability/Camouflage, Concealment and Deception Division mission is to provide affordable aircraft and ground electronic sensor/systems and signature management technologies which enhance the survivability and lethality of US and International Forces. Since 1992, efforts have been undertaken in the area of Situational Awareness and Dominant Battlespace Knowledge. These include the Radar Deception and Jamming Advanced Technology Demonstration (ATD), Survivability and Targeting System Integration, Integrated Situation Awareness and Targeting ATD, Combat Identification, Ground Vehicle Situational Awareness, and Combined Electronic Intelligence Target Correlation. This paper will address the Situational Awareness process as it relates to the integration of Electronic Warfare (EW) with targeting, intelligence, and information warfare systems. Discussion will be presented on Sensor Fusion, Situation Assessment and Response Management strategies. Sensor Fusion includes the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats as well as their significance. Situation Assessment includes the process of interpreting and expressing the environment based on situation abstract products and information from technical and doctrinal databases. Finally, Response Management provides the centralized, adaptable control of all renewable and expendable countermeasure assets, resulting in optimization of the response to the threat environment.

  17. Two-dimensional optimal sensor placement

    SciTech Connect

    Zhang, H.

    1995-05-01

    A method for determining the optimal two-dimensional spatial placement of multiple sensors participating in a robot perception task is introduced in this paper. This work is motivated by the fact that sensor data fusion is an effective means of reducing uncertainties in sensor observations, and that the combined uncertainty varies with the relative placement of the sensors with respect to each other. The problem of optimal sensor placement is formulated and a solution is presented in two-dimensional space. The algebraic structure of the combined sensor uncertainty with respect to the placement of the sensors is studied. A necessary condition for optimal placement is derived and this necessary condition is used to obtain an efficient closed-form solution for the global optimal placement. Numerical examples are provided to illustrate the effectiveness and efficiency of the solution. 11 refs.
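
    The central point is that the fused uncertainty depends on the relative geometry of the sensors. A small numerical illustration of that dependence, using a brute-force sweep rather than the paper's closed-form solution (the noise figures are made up):

```python
import numpy as np

def sensor_cov(bearing, sigma_range=1.0, sigma_cross=0.2):
    """Observation covariance of one sensor, elongated along its line of sight."""
    c, s = np.cos(bearing), np.sin(bearing)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([sigma_range**2, sigma_cross**2]) @ R.T

def fused_trace(theta1, theta2):
    """Trace of the fused covariance of two independent sensors."""
    P1, P2 = sensor_cov(theta1), sensor_cov(theta2)
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    return np.trace(P)

# Sweep the angular separation between the two lines of sight.
angles = np.radians(np.arange(1, 180))
traces = [fused_trace(0.0, a) for a in angles]
best = angles[int(np.argmin(traces))]
print("best separation (deg):", np.degrees(best))   # ~90 deg for this model
```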

  18. Sensor fusion for intelligent alarm analysis

    SciTech Connect

    Nelson, C.L.; Fitzgerald, D.S.

    1995-03-01

    The purpose of an intelligent alarm analysis system is to provide complete and manageable information to a central alarm station operator by applying alarm processing and fusion techniques to sensor information. This paper discusses the sensor fusion approach taken to perform intelligent alarm analysis for the Advanced Exterior Sensor (AES). The AES is an intrusion detection and assessment system designed for wide-area coverage, quick deployment, low false/nuisance alarm operation, and immediate visual assessment. It combines three sensor technologies (visible, infrared, and millimeter wave radar) collocated on a compact and portable remote sensor module. The remote sensor module rotates at a rate of 1 revolution per second to detect and track motion and provide assessment in a continuous 360° field-of-regard. Sensor fusion techniques are used to correlate and integrate the track data from these three sensors into a single track for operator observation. Additional inputs to the fusion process include environmental data, knowledge of sensor performance under certain weather conditions, sensor priority, and recent operator feedback. A confidence value is assigned to the track as a result of the fusion process. This helps to reduce nuisance alarms and to increase operator confidence in the system while reducing the workload of the operator.

  19. Sensor management in an ASW data fusion system

    NASA Astrophysics Data System (ADS)

    Penny, Dawn E.

    1999-03-01

    The Multi-Sensor Fusion Management (MSFM) algorithm positions multiple, detection-only, passive sensors in a 2D plane to optimize the fused probability of detection using a simple decision fusion method. Previously, the MSFM algorithm was evaluated on two synthetic problem domains comprising both static and moving targets. In the original formulation the probability distribution of the target location was modelled using a non-parametric approach. The logarithm of the fused detection probability was used as a criterion function for the optimisation of the sensor positions. This optimisation used a straightforward gradient ascent approach, which occasionally found local optima. Following the placement optimisation the sensors were deployed and the individual sensor detections combined using a logical OR fusion rule. The target location distribution could then be updated using the method of sampling, importance re-sampling (SIR). In the current work the algorithm is extended to admit a richer variety of behaviour. More realistic sensor characteristic models are used, which include detection-plus-bearing sensors and false alarm probabilities commensurate with actual sonar sensor systems. In this paper the performance of the updated MSFM algorithm is illustrated on a realistic anti-submarine warfare (ASW) application in which the placement of the sensors is carried out incrementally, allowing for the optimisation of both the location and the number of sensors to be deployed.
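
    Under the logical OR rule used here, the fused probability of detection is one minus the product of the individual miss probabilities, and sensor placement is chosen to maximise that quantity over the target-location distribution. A toy sketch of the criterion (an illustrative range-dependent detection model, not the paper's sonar characteristics):

```python
import numpy as np

def fused_pd(sensor_xy, target_xy, pd0=0.9, half_range=2.0):
    """Fused probability of detection under a logical-OR fusion rule.

    Each sensor is modelled (purely for illustration) with a detection
    probability that decays with range; the fused system detects if at
    least one sensor does.
    """
    d = np.linalg.norm(np.asarray(sensor_xy) - np.asarray(target_xy), axis=1)
    pd = pd0 * np.exp(-(d / half_range) ** 2)     # toy per-sensor Pd model
    return 1.0 - np.prod(1.0 - pd)                # OR rule

def mean_fused_pd(sensor_xy, target_samples):
    """Criterion averaged over samples of the target-location distribution."""
    return np.mean([fused_pd(sensor_xy, t) for t in target_samples])

rng = np.random.default_rng(1)
target_samples = rng.normal([0.0, 0.0], 1.5, size=(200, 2))  # prior samples
layout_a = [[-1.0, 0.0], [1.0, 0.0]]
layout_b = [[-3.0, 0.0], [3.0, 0.0]]
print(mean_fused_pd(layout_a, target_samples),
      mean_fused_pd(layout_b, target_samples))
```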

  20. Distributed Sensor Fusion Performance Analysis Under an Uncertain Environment

    DTIC Science & Technology

    2012-10-01

    cannot be obtained accurately, the sub-optimal fusion processor is assumed to have an estimated correlation coefficient and its performance difference... detectability indices for the sub-optimal and optimal cases is derived as a function of the true correlation coefficient, the estimated value, and the... performance is to a mismatched estimation of the correlation coefficient. Furthermore, we show that for the special case where all local sensors have the

  1. Sensor fusion for mobile robot navigation

    SciTech Connect

    Kam, M.; Zhu, X.; Kalata, P.

    1997-01-01

    The authors review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. Integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. The review describes integration techniques in two categories: low-level fusion is used for direct integration of sensory data, resulting in parameter and state estimates; high-level fusion is used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules. The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior-based algorithms and approaches that borrow from information theory, Dempster-Shafer reasoning, fuzzy logic and neural networks. It points to several further research needs, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion.

  2. Enhanced chemical weapon warning via sensor fusion

    NASA Astrophysics Data System (ADS)

    Flaherty, Michael; Pritchett, Daniel; Cothren, Brian; Schwaiger, James

    2011-05-01

    Torch Technologies, Inc., is actively involved in chemical sensor networking and data fusion via multi-year efforts with Dugway Proving Ground (DPG) and the Defense Threat Reduction Agency (DTRA). The objective of these efforts is to develop innovative concepts and advanced algorithms that enhance our national Chemical Warfare (CW) test and warning capabilities via the fusion of traditional and non-traditional CW sensor data. Under Phase I, II, and III Small Business Innovative Research (SBIR) contracts with DPG, Torch developed the Advanced Chemical Release Evaluation System (ACRES) software to support non-real-time CW sensor data fusion. Under Phase I and II SBIRs with DTRA in conjunction with the Edgewood Chemical Biological Center (ECBC), Torch is using the DPG ACRES CW sensor data fuser as a framework from which to develop the Cloud state Estimation in a Networked Sensor Environment (CENSE) data fusion system. Torch is currently developing CENSE to implement and test innovative real-time sensor network based data fusion concepts using CW and non-CW ancillary sensor data to improve CW warning and detection in tactical scenarios.

  3. Case for like-sensor predetection fusion

    SciTech Connect

    Willett, P.; Alford, M.; Vannicola, V.

    1994-10-01

    There has been a great deal of theoretical study into decentralized detection networks composed of similar (often identical), independent sensors, and this has produced a number of satisfying theoretical results. At this point it is perhaps worth asking whether or not there is a great deal of point to such study - certainly two sensors can provide twice the illumination of one, but what does this really translate to in terms of performance? We take as our metric the ground area covered with a specified Neyman-Pearson detection performance. To be fair, the comparison will be of a multisensor network to a single-sensor system where both have the same aggregate transmitter power. The situations examined are by no means exhaustive but are, we believe, representative. Is there a case? The answer, as might be expected, is "sometimes." When the statistical situation is well behaved there is very little benefit to a fused system; however, when the environment is hostile the gains can be significant. We see, depending on the situation, gains from colocation, gains from separation, optimal gains from operation at a "fusion range," and sometimes no gains at all. 13 refs.

  4. Evaluation of taste solutions by sensor fusion

    SciTech Connect

    Kojima, Yohichiro; Sato, Eriko; Atobe, Masahiko; Nakashima, Miki; Kato, Yukihisa; Nonoue, Koichi; Yamano, Yoshimasa

    2009-05-23

    In our previous studies, properties of taste solutions were discriminated based on the sound velocity and amplitude of ultrasonic waves propagating through the solutions. However, to make this method applicable to beverages which contain many taste substances, further studies are required. In this study, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through a solution was measured and subjected to frequency analysis. Further, taste sensors require various techniques of sensor fusion to effectively obtain chemical and physical parameters of taste solutions. A sensor fusion method combining the ultrasonic wave sensor with various other sensors, such as the surface plasmon resonance (SPR) sensor, to estimate tastes was proposed and examined in this report. As a result, differences among pure water and two basic taste solutions were clearly observed as differences in their properties. Furthermore, a self-organizing neural network was applied to the obtained data, which were used to clarify the differences among the solutions.

  5. State-based sensor fusion for surveillance

    NASA Astrophysics Data System (ADS)

    Murphy, Robin R.

    1992-04-01

    This paper presents a state-based control scheme for sensor fusion in autonomous mobile robots. States specify the sensing strategy for each sensor; the feedback rule to be applied to the sensors; and a set of failure conditions, which signal abnormal or inconsistent evidence. Experiments were conducted in the surveillance domain, where the robot was to determine if three different areas in a cluttered tool room remained unchanged after each visit. The data collected from four sensors (a Sony Hi8 color camcorder, a Pulnix black and white camera, an Inframetrics true infrared camera, and Polaroid ultrasonic transducers) and fused using the sensor fusion effects architecture (SFX) support the claims that the state-based control scheme produces percepts which are consistent with the scene being viewed, can improve the global belief in a percept, can improve the sensing quality of the robot, and is robust under a variety of conditions.

  6. Optimum geometry selection for sensor fusion

    NASA Astrophysics Data System (ADS)

    Kadar, Ivan

    1998-07-01

    A relative sensors-to-target geometry measure-of-merit (MOM), based on the Geometric Dilution of Precision (GDOP) measure, is developed. The method of maximum likelihood estimation is introduced for the solution of the position location problem. A linearized measurement model-based error sensitivity analysis is used to derive an expression for the GDOP MOM. The GDOP MOM relates the sensor measurement errors to the target position errors as a function of sensors-to-target geometry. In order to illustrate the efficacy of GDOP MOM for fusion systems, GDOP functional relationships are computed for bearing-only measuring sensors-to-target geometries. The minimum GDOP and associated specific target-to-sensors geometries are computed and illustrated for both two and three bearing-only measuring sensors. Two and three-dimensional plots of relative error contours provide a geometric insight to sensor placement as a function of geometry induced error dilution. The results can be used to select preferred target-to-sensor(s) geometries for M sensors in this application. The GDOP MOM is general and is readily extendable to other measurement-based sensors and fusion architectures.
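
    A common way to form such a GDOP measure is to linearize the bearing measurements about the target position and take the square root of the trace of the resulting position-error covariance. A brief sketch along those lines (illustrative values; not necessarily the exact MOM defined in the paper):

```python
import numpy as np

def bearing_gdop(sensor_xy, target_xy, sigma_bearing=np.radians(1.0)):
    """GDOP-style measure for bearing-only sensors at a given geometry.

    Linearize each bearing measurement about the target position; the
    position-error covariance is (H^T H)^{-1} scaled by the bearing variance.
    """
    target = np.asarray(target_xy, float)
    rows = []
    for sx, sy in np.asarray(sensor_xy, float):
        dx, dy = target[0] - sx, target[1] - sy
        r2 = dx * dx + dy * dy
        rows.append([-dy / r2, dx / r2])          # d(bearing)/d(target position)
    H = np.array(rows)
    cov = sigma_bearing**2 * np.linalg.inv(H.T @ H)
    return np.sqrt(np.trace(cov))

# Two bearing-only sensors: orthogonal lines of sight beat nearly collinear ones.
print(bearing_gdop([[0, 0], [10, 0]], [5, 5]))    # ~90 deg separation
print(bearing_gdop([[0, 0], [1, 0]], [5, 5]))     # nearly collinear
```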

  7. Fusion of Images from Dissimilar Sensor Systems

    DTIC Science & Technology

    2004-12-01

    based fusion concepts and presents results demonstrating the robustness of the approach. Final remarks are provided in Chapter V. ... "multiresolution analysis" methods. Image fusion by the statistical and numerical approach utilizes methods such as Principal Component Analysis (PCA) and... represent the pixel intensities in LWIR and MWIR sensors respectively. They are statistically decomposed using PCA into orthogonal components L1' and

  8. Minimum energy information fusion in sensor networks

    SciTech Connect

    Chapline, G

    1999-05-11

    In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition we show that for networks consisting of a large number of identical sensors Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.

  9. Health-Enabled Smart Sensor Fusion Technology

    NASA Technical Reports Server (NTRS)

    Wang, Ray

    2012-01-01

    A process was designed to fuse data from multiple sensors in order to make a more accurate estimation of the environment and overall health in an intelligent rocket test facility (IRTF), to provide reliable, high-confidence measurements for a variety of propulsion test articles. The object of the technology is to provide sensor fusion based on a distributed architecture. Specifically, the fusion technology is intended to succeed in providing health condition monitoring capability at the intelligent transceiver, such as RF signal strength, battery reading, computing resource monitoring, and sensor data reading. The technology also provides analytic and diagnostic intelligence at the intelligent transceiver, enhancing the IEEE 1451.x-based standard for sensor data management and distributions, as well as providing appropriate communications protocols to enable complex interactions to support timely and high-quality flow of information among the system elements.

  10. A least squares fusion rule in multiple sensors distributed detection systems

    NASA Astrophysics Data System (ADS)

    Aziz, A. M.

    In this paper, a new least squares data fusion rule for a multiple-sensor distributed detection system is proposed. In the proposed approach, the central processor combines the sensors' hard decisions through a least squares criterion to make the global hard decision of the central processor. In contrast to the optimum Neyman-Pearson fusion, where the distributed detection system is optimized at the fusion center level or at the sensor level, but not simultaneously, the proposed approach achieves global optimization at both the fusion center and the distributed sensor levels. This is done without knowing the error probabilities of each individual distributed sensor. Thus the proposed least squares fusion rule does not rely on any stability of the noise environment or of the sensors' false alarm and detection probabilities. Therefore, the proposed least squares fusion rule is robust and achieves better global performance. Furthermore, the proposed method can easily be applied to any number of sensors and any type of distributed observations. The performance of the proposed least squares fusion rule is evaluated and compared to the optimum Neyman-Pearson fusion rule. The results show that the proposed least squares fusion rule outperforms the Neyman-Pearson fusion rule.

  11. Downscaling soil moisture over East Asia through multi-sensor data fusion and optimization of regression trees

    NASA Astrophysics Data System (ADS)

    Park, Seonyoung; Im, Jungho; Park, Sumin; Rhee, Jinyoung

    2017-04-01

    optimization based on pruning of rules derived from the modified regression trees was conducted. Root mean square error (RMSE) and correlation coefficient (r) were used to optimize the rules, and finally 59 rules from the modified regression trees were produced. The results show a high validation r (0.79) and a low validation RMSE (0.0556 m3/m3). The 1 km downscaled soil moisture was evaluated using ground soil moisture data at 14 stations, and both soil moisture datasets showed similar temporal patterns (average r = 0.51 and average RMSE = 0.041). The spatial distribution of the 1 km downscaled soil moisture corresponded well with GLDAS soil moisture, capturing both extremely dry and wet regions. The correlation between GLDAS and the 1 km downscaled soil moisture during the growing season was positive (mean r = 0.35) in most regions.

  12. Evaluating fusion techniques for multi-sensor satellite image data

    SciTech Connect

    Martin, Benjamin W; Vatsavai, Raju

    2013-01-01

    Satellite image data fusion is a topic of interest in many areas including environmental monitoring, emergency response, and defense. Typically any single satellite sensor cannot provide all of the benefits offered by a combination of different sensors (e.g., high-spatial but low spectral resolution vs. low-spatial but high spectral, optical vs. SAR). Given the respective strengths and weaknesses of the different types of image data, it is beneficial to fuse many types of image data to extract as much information as possible from the data. Our work focuses on the fusion of multi-sensor image data into a unified representation that incorporates the potential strengths of a sensor in order to minimize classification error. Of particular interest is the fusion of optical and synthetic aperture radar (SAR) images into a single, multispectral image of the best possible spatial resolution. We explore various methods to optimally fuse these images and evaluate the quality of the image fusion by using K-means clustering to categorize regions in the fused images and comparing the accuracies of the resulting categorization maps.

  13. Sensor fusion for precision agriculture

    USDA-ARS?s Scientific Manuscript database

    Information-based management of crop production systems known as precision agriculture relies on different sensor technologies aimed at characterization of spatial heterogeneity of a cropping environment. Remote and proximal sensing systems have been deployed to obtain high-resolution data pertainin...

  14. Distributed Algorithms for Sensor Fusion

    DTIC Science & Technology

    2007-11-02

    estimation problem. That is, suppose k sensors fire. We wish to determine Pr(TANK | k). Simple algebra gives: Pr(TANK | k) = Pr(k | TANK) Pr(TANK) / Pr(k)... Cambridge, MA: MIT Press 1998). [28] W. Kohn and A. Nerode, "Multiple Agent Hybrid Control Architecture", in R. L. Grossman, A. Nerode, A. Ravn and H. Rischel
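
    Read as Bayes' rule with a binomial likelihood for the number of firing sensors, the reconstructed expression can be evaluated directly. A minimal sketch with assumed (purely illustrative) firing probabilities and prior:

```python
from math import comb

def posterior_tank(k, n, p_fire_tank, p_fire_no_tank, prior_tank):
    """Pr(TANK | k of n independent, identical sensors fire), via Bayes' rule
    with binomial likelihoods. All probability values here are illustrative."""
    like_tank = comb(n, k) * p_fire_tank**k * (1 - p_fire_tank)**(n - k)
    like_none = comb(n, k) * p_fire_no_tank**k * (1 - p_fire_no_tank)**(n - k)
    evidence = like_tank * prior_tank + like_none * (1 - prior_tank)
    return like_tank * prior_tank / evidence

for k in range(6):
    print(k, round(posterior_tank(k, n=5, p_fire_tank=0.8,
                                  p_fire_no_tank=0.1, prior_tank=0.2), 3))
```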

  15. Sensor fusion for improved indoor navigation

    NASA Astrophysics Data System (ADS)

    Emilsson, Erika; Rydell, Joakim

    2012-09-01

    A reliable indoor positioning system providing high accuracy has the potential to increase the safety of first responders and military personnel significantly. To enable navigation in a broad range of environments and obtain more accurate and robust positioning results, we propose a multi-sensor fusion approach. We describe and evaluate a positioning system, based on sensor fusion between a foot-mounted inertial measurement unit (IMU) and a camera-based system for simultaneous localization and mapping (SLAM). The complete system provides accurate navigation in many relevant environments without depending on preinstalled infrastructure. The camera-based system uses both inertial measurements and visual data, thereby enabling navigation also in environments and scenarios where one of the sensors provides unreliable data during a few seconds. When sufficient light is available, the camera-based system generally provides good performance. The foot-mounted system provides accurate positioning when distinct steps can be detected, e.g., during walking and running, even in dark or smoke-filled environments. By combining the two systems, the integrated positioning system can be expected to enable accurate navigation in almost all kinds of environments and scenarios. In this paper we present results from initial tests, which show that the proposed sensor fusion improves the navigation solution considerably in scenarios where either the foot-mounted or camera-based system is unable to navigate on its own.

  16. Robot navigation using simple sensor fusion

    SciTech Connect

    Jollay, D.M.; Ricks, R.E.

    1988-01-01

    Sensors on an autonomous mobile system are essential for environment determination for navigation purposes. As is well documented in previous publications, sonar sensors are inadequate in providing a depiction of a real-world environment and therefore do not provide accurate information for navigation if not used in conjunction with another type of sensor. This paper describes a simple, inexpensive, and relatively fast navigation algorithm involving vision and sonar sensor fusion for use in navigating an autonomous robot in an unknown and potentially dynamic environment. Navigation of the mobile robot was accomplished by use of a TV camera as the primary sensor. Input data received from the camera were digitized through a video module and then processed using a dedicated vision system to enable detection of obstacles and to determine edge positions relative to the robot. Since 3D vision was not attempted due to its complex and time-consuming nature, sonar sensors were then used as secondary sensors in order to determine the proximity of detected obstacles. By then fusing the sensor data, the robot was able to navigate (quickly and collision free) to a given goal, achieving obstacle avoidance in real time.

  17. Multimodal Sensor Fusion for Personnel Detection

    DTIC Science & Technology

    2011-07-01

    Multimodal Sensor Fusion for Personnel Detection. Xin Jin, Shalabh Gupta, Asok Ray, Department of Mechanical Engineering, The Pennsylvania State... have considered relations taken only two at a time, but we propose to explore relations between higher order cliques as future work. D. Feature... detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 577–589, 2001. [11] A. Ray, "Symbolic dynamic analysis

  18. Data fusion in entangled networks of quantum sensors

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco; Jitrik, Oliverio; Uhlmann, Jeffrey; Venegas-Andraca, Salvador E.

    2017-05-01

    In this paper we discuss two potential areas of intersection between Quantum Information Technologies and Information Fusion. The first area, which we call Quantum (Data Fusion), refers to the use of quantum computers to perform data fusion algorithms with classical data generated by quantum and classical sensors. As we discuss, we expect that these quantum fusion algorithms will have a better computational complexity than traditional fusion algorithms. This means that quantum computers could allow the efficient fusion of large data sets for complex multi-target tracking. On the other hand, (Quantum Data) Fusion refers to the fusion of quantum data that is being generated by quantum sensors. The output of the quantum sensors is considered in the form of qubits, and a quantum computer performs the data fusion algorithms. Our theoretical models suggest that these algorithms can increase the sensitivity of the quantum sensor network.

  19. Non-verbal communication through sensor fusion

    NASA Astrophysics Data System (ADS)

    Tairych, Andreas; Xu, Daniel; O'Brien, Benjamin M.; Anderson, Iain A.

    2016-04-01

    When we communicate face to face, we subconsciously engage our whole body to convey our message. In telecommunication, e.g. during phone calls, this powerful information channel cannot be used. Capturing nonverbal information from body motion and transmitting it to the receiver parallel to speech would make these conversations feel much more natural. This requires a sensing device that is capable of capturing different types of movements, such as the flexion and extension of joints, and the rotation of limbs. In a first embodiment, we developed a sensing glove that is used to control a computer game. Capacitive dielectric elastomer (DE) sensors measure finger positions, and an inertial measurement unit (IMU) detects hand roll. These two sensor technologies complement each other, with the IMU allowing the player to move an avatar through a three-dimensional maze, and the DE sensors detecting finger flexion to fire weapons or open doors. After demonstrating the potential of sensor fusion in human-computer interaction, we take this concept to the next level and apply it in nonverbal communication between humans. The current fingerspelling glove prototype uses capacitive DE sensors to detect finger gestures performed by the sending person. These gestures are mapped to corresponding messages and transmitted wirelessly to another person. A concept for integrating an IMU into this system is presented. The fusion of the DE sensor and the IMU combines the strengths of both sensor types, and therefore enables very comprehensive body motion sensing, which makes a large repertoire of gestures available to nonverbal communication over distances.

  20. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.

  1. Phenomenology-Based Inverse Scattering for Sensor Information Fusion

    DTIC Science & Technology

    2006-09-15

    Phenomenology-Based Inverse Scattering for Sensor Information Fusion. Kung-Hau Ding. Final Report, 15 September 2006. Hanscom AFB, MA 01731-2909. Approved for Public Release; Distribution Unlimited, Statement A. [The remainder of this excerpt is report documentation form residue: program element 61102F, project number 2304.]

  2. Sensor Fusion and Smart Sensor in Sports and Biomedical Applications

    PubMed Central

    Mendes, José Jair Alves; Vieira, Mário Elias Marinho; Pires, Marcelo Bissi; Stevan, Sergio Luiz

    2016-01-01

    The following work presents an overview of smart sensors and sensor fusion targeted at biomedical applications and sports areas. In this work, the integration of these areas is demonstrated, promoting a reflection about techniques and applications to collect, quantify and qualify some physical variables associated with the human body. These techniques are presented in various biomedical and sports applications, which cover areas related to diagnostics, rehabilitation, physical monitoring, and the development of performance in athletes, among others. Although some applications are described in only one of two fields of study (biomedicine and sports), it is very likely that the same application fits in both, with small peculiarities or adaptations. To illustrate the contemporaneity of applications, an analysis of specialized papers published in the last six years has been made. In this context, the main characteristic of this review is to present the largest quantity of relevant examples of sensor fusion and smart sensors focusing on their utilization and proposals, without deeply addressing one specific system or technique, to the detriment of the others. PMID:27669260

  3. Sensor Fusion and Smart Sensor in Sports and Biomedical Applications.

    PubMed

    Mendes, José Jair Alves; Vieira, Mário Elias Marinho; Pires, Marcelo Bissi; Stevan, Sergio Luiz

    2016-09-23

    The following work presents an overview of smart sensors and sensor fusion targeted at biomedical applications and sports areas. In this work, the integration of these areas is demonstrated, promoting a reflection about techniques and applications to collect, quantify and qualify some physical variables associated with the human body. These techniques are presented in various biomedical and sports applications, which cover areas related to diagnostics, rehabilitation, physical monitoring, and the development of performance in athletes, among others. Although some applications are described in only one of two fields of study (biomedicine and sports), it is very likely that the same application fits in both, with small peculiarities or adaptations. To illustrate the contemporaneity of applications, an analysis of specialized papers published in the last six years has been made. In this context, the main characteristic of this review is to present the largest quantity of relevant examples of sensor fusion and smart sensors focusing on their utilization and proposals, without deeply addressing one specific system or technique, to the detriment of the others.

  4. Method and apparatus for sensor fusion

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)

    1991-01-01

    Method and apparatus for the fusion of data from optical and radar sensors by an error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values for the RCS, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.

  5. Statistical modeling and data fusion of automotive sensors for object detection applications in a driving environment

    NASA Astrophysics Data System (ADS)

    Hurtado, Miguel A.

    In this work, we consider the application of classical statistical inference to the fusion of data from different sensing technologies for object detection applications in order to increase the overall performance for a given active safety automotive system. Research evolved mainly around a centralized sensor fusion architecture assuming that three non-identical sensors, modeled by corresponding probability density functions (pdfs), provide discrete information of target being present or absent with associated probabilities of detection and false alarm for the sensor fusion engine. The underlying sensing technologies are the following standard automotive sensors: 24.5 GHz radar, high dynamic range infrared camera and a laser-radar. A complete mathematical framework was developed to select the optimal decision rule based on a generalized multinomial distribution resulting from a sum of weighted Bernoulli random variables from the Neyman-Pearson lemma and the likelihood ratio test. Moreover, to better understand the model and to obtain upper bounds on the performance of the fusion rules, we assumed exponential pdfs for each sensor and a parallel mathematical expression was obtained based on a generalized gamma distribution resulting from a sum of weighted exponential random variables for the situation when the continuous random vector of information is available. Mathematical expressions and results were obtained for modeling the following case scenarios: (i) non-identical sensors, (ii) identical sensors, (iii) combination of nonidentical and identical sensors, (iv) faulty sensor operation, (v) dominant sensor operation, (vi) negative sensor operation, and (vii) distributed sensor fusion. The second and final part of this research focused on: (a) simulation of statistical models for each sensing technology, (b) comparisons with distributed fusion, (c) overview of dynamic sensor fusion and adaptive decision rules.
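
    For hard sensor decisions, the Neyman-Pearson machinery described above amounts to ranking the possible decision vectors by likelihood ratio and declaring a detection for the top-ranked vectors until the global false-alarm budget is spent. A small sketch with placeholder per-sensor probabilities (not the modeled radar, infrared, and laser-radar sensors):

```python
import itertools
import numpy as np

# Placeholder per-sensor detection / false-alarm probabilities.
pd = np.array([0.85, 0.70, 0.90])
pfa = np.array([0.10, 0.05, 0.15])

def vector_probs(u, p_detect, p_false):
    """P(u | target present), P(u | target absent) for a decision vector u."""
    p1 = np.prod(np.where(u, p_detect, 1 - p_detect))
    p0 = np.prod(np.where(u, p_false, 1 - p_false))
    return p1, p0

# Enumerate all 2^3 decision vectors and sort by likelihood ratio (descending).
vectors = list(itertools.product([0, 1], repeat=3))
scored = sorted(vectors,
                key=lambda u: vector_probs(np.array(u), pd, pfa)[0]
                / vector_probs(np.array(u), pd, pfa)[1],
                reverse=True)

# Neyman-Pearson: declare "target" for the top-ranked vectors until the
# global false-alarm budget is exhausted (ignoring randomization at the edge).
alpha, global_pfa, global_pd = 0.05, 0.0, 0.0
for u in scored:
    p1, p0 = vector_probs(np.array(u), pd, pfa)
    if global_pfa + p0 > alpha:
        break
    global_pfa += p0
    global_pd += p1
print("fused Pfa:", round(global_pfa, 4), "fused Pd:", round(global_pd, 4))
```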

  6. Sensor fusion for intelligent behavior on small unmanned ground vehicles

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Ahuja, G.; Sights, B.; Pacis, E. B.; Everett, H. R.

    2007-04-01

    Sensors commonly mounted on small unmanned ground vehicles (UGVs) include visible light and thermal cameras, scanning LIDAR, and ranging sonar. Sensor data from these sensors is vital to emerging autonomous robotic behaviors. However, sensor data from any given sensor can become noisy or erroneous under a range of conditions, reducing the reliability of autonomous operations. We seek to increase this reliability through data fusion. Data fusion includes characterizing the strengths and weaknesses of each sensor modality and combining their data in a way such that the result of the data fusion provides more accurate data than any single sensor. We describe data fusion efforts applied to two autonomous behaviors: leader-follower and human presence detection. The behaviors are implemented and tested in a variety of realistic conditions.

  7. Cognitive foundations for model-based sensor fusion

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Weijers, Bertus; Mutz, Chris W.

    2003-08-01

    Target detection, tracking, and sensor fusion are complicated problems that are usually performed sequentially: first detecting targets, then tracking, then fusing multiple sensors, which reduces computation. This procedure, however, is inapplicable to difficult targets that cannot be reliably detected using individual sensors on individual scans or frames. In such cases the functions of fusion, tracking, and detection have to be performed concurrently. This often leads to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance relative to the information-theoretic content of all the available data. It is well appreciated that the human mind is qualitatively far superior to existing mathematical methods of sensor fusion for this task; however, the mind is limited in the amount of information and the speed of computation it can cope with. Research efforts have therefore been devoted to incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how can existing limitations be overcome? The fundamental reasons for current limitations are analyzed and a potentially breakthrough research and development effort is outlined. We draw on the way our mind combines emotions and concepts in the thinking process and present a mathematical approach to accomplishing this on current computers. The presentation summarizes the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyzes the fundamental limitations of existing algorithms and neural networks, and relates them to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic is introduced along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with

  8. A Fusion Architecture for Tracking a Group of People Using a Distributed Sensor Network

    DTIC Science & Technology

    2013-07-01

    capability to identify the targets at closer ranges. It is also assumed that the UGS is equipped with electronics to communicate to its immediate... homeowners and others as motion sensors to alert them of intruders. PIR sensors register the thermal radiation emitted by targets and objects in their field... "Optimal data fusion in multiple sensor detection systems," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-22, no. 1, pp. 98–101, January

  9. Sensor fusion for intelligent process control.

    SciTech Connect

    Connors, John J.; Hill, Kevin; Hanekamp, David; Haley, William F.; Gallagher, Robert J.; Gowin, Craig; Farrar, Arthur R.; Sheaffer, Donald A.; DeYoung, Mark A.; Bertram, Lee A.; Dodge, Craig; Binion, Bruce; Walsh, Peter M.; Houf, William G.; Desam, Padmabhushana R.; Tiwary, Rajiv; Stokes, Michael R.; Miller, Alan J.; Michael, Richard W.; Mayer, Raymond M.; Jiao, Yu; Smith, Philip J.; Arbab, Mehran; Hillaire, Robert G.

    2004-08-01

    An integrated system for the fusion of product and process sensors and controls for production of flat glass was envisioned, having as its objective the maximization of throughput and product quality subject to emission limits, furnace refractory wear, and other constraints. Although the project was prematurely terminated, stopping the work short of its goal, the tasks that were completed show the value of the approach and objectives. Though the demonstration was to have been done on a flat glass production line, the approach is applicable to control of production in the other sectors of the glass industry. Furthermore, the system architecture is also applicable in other industries utilizing processes in which product uniformity is determined by ability to control feed composition, mixing, heating and cooling, chemical reactions, and physical processes such as distillation, crystallization, drying, etc. The first phase of the project, with Visteon Automotive Systems as industrial partner, was focused on simulation and control of the glass annealing lehr. That work produced the analysis and computer code that provide the foundation for model-based control of annealing lehrs during steady state operation and through color and thickness changes. In the second phase of the work, with PPG Industries as the industrial partner, the emphasis was on control of temperature and combustion stoichiometry in the melting furnace, to provide a wider operating window, improve product yield, and increase energy efficiency. A program of experiments with the furnace, CFD modeling and simulation, flow measurements, and sensor fusion was undertaken to provide the experimental and theoretical basis for an integrated, model-based control system utilizing the new infrastructure installed at the demonstration site for the purpose. In spite of the fact that the project was terminated during the first year of the second phase of the work, the results of these first steps toward implementation

  10. Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu

    2015-01-01

    Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
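
    A hedged sketch of how an average bit error rate can be folded into a likelihood-ratio fusion rule: the probability of decoding a one at the fusion center under each hypothesis is a channel-corrupted version of the local operating point. The function and its arguments are illustrative, not the paper's exact formulation.

      import numpy as np

      def lrt_fusion_with_ber(decoded, pd, pfa, ber, threshold=0.0):
          """Likelihood-ratio fusion of decoded binary decisions when each
          sensor-to-FC link has an average bit error rate; decoded, pd, pfa
          and ber are equal-length arrays."""
          decoded = np.asarray(decoded, dtype=float)
          pd, pfa, ber = map(np.asarray, (pd, pfa, ber))
          # Probability of decoding a '1' under each hypothesis, channel included
          p1_h1 = (1 - ber) * pd + ber * (1 - pd)
          p1_h0 = (1 - ber) * pfa + ber * (1 - pfa)
          llr = np.sum(decoded * np.log(p1_h1 / p1_h0)
                       + (1 - decoded) * np.log((1 - p1_h1) / (1 - p1_h0)))
          return int(llr > threshold)

      # Example: three sensors, links with different average bit error rates
      print(lrt_fusion_with_ber([1, 1, 0], pd=[0.9, 0.8, 0.85],
                                pfa=[0.05, 0.1, 0.08], ber=[0.01, 0.05, 0.1]))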

  11. Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    PubMed Central

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne and satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are described. Both the advantages and the limitations of those applications are then discussed. Recommendations are offered, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; and (3) establishment of an automatic quality assessment scheme. PMID:22408479

  12. Sensor placement optimization in buildings

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Tisato, Francesco

    2012-01-01

    In this work we address the problem of optimal sensor placement for a given region and task. An important issue in designing sensor arrays is placing the sensors such that they achieve a predefined goal. Many problems could be considered in the placement of multiple sensors; here we focus on the four problems identified by Hörster and Lienhart. To solve these problems, we propose an algorithm based on Direct Search, which is able to approach the globally optimal solution within reasonable time and memory consumption. The algorithm is experimentally evaluated on two real floorplans. The experimental results show that our Direct Search algorithm improves on the results given by the best-performing heuristic introduced by Hörster and Lienhart. The algorithm is then extended to continuous solution spaces and to 3D problems.
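
    A toy pattern-search (Direct Search) sketch for sensor placement, assuming a simple coverage objective over sampled floorplan points. The objective, step schedule, and parameters are illustrative assumptions and not the paper's actual formulation of the Hörster-Lienhart problems.

      import numpy as np

      def coverage(sensors, points, radius):
          """Fraction of sample points within sensing radius of any sensor."""
          d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=2)
          return np.mean(d.min(axis=1) <= radius)

      def direct_search_placement(points, n_sensors, radius, step=1.0,
                                  n_iter=100, seed=0):
          """Pattern-search refinement of sensor positions: move one sensor
          at a time along the coordinate directions, keep moves that improve
          coverage, and shrink the step when no move helps."""
          rng = np.random.default_rng(seed)
          sensors = points[rng.choice(len(points), n_sensors, replace=False)].copy()
          for _ in range(n_iter):
              improved = False
              for i in range(n_sensors):
                  for delta in [(step, 0), (-step, 0), (0, step), (0, -step)]:
                      trial = sensors.copy()
                      trial[i] += delta
                      if coverage(trial, points, radius) > coverage(sensors, points, radius):
                          sensors = trial
                          improved = True
              if not improved:
                  step *= 0.5
          return sensors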

  13. Principles of data-fusion in multi-sensor systems for non-destructive testing

    NASA Astrophysics Data System (ADS)

    Chioclea, Shmuel; Dickstein, Phineas

    2000-05-01

    In recent years, there has been progress in the application of measurement and control systems that employ multi-sensor arrays, and several algorithms and techniques have been developed for integrating the information obtained from the sensors. The fusion of the data may be complicated by the fact that each sensor has its own performance characteristics and that different sensors may detect different physical phenomena. As a result, data fusion is a multidisciplinary field that applies principles adopted from other fields such as signal processing, artificial intelligence, statistics, and information theory. The data fusion machine tries to imitate the human brain in combining data from numerous sensors and making optimal inferences about the environment. The present paper provides a critical review of data fusion algorithms and techniques and a trenchant summary of the experience gained to date from several preliminary NDT studies that have applied multi-sensor data fusion. Consequently, this paper provides a list of rules and criteria to be followed in future applications of data fusion to nondestructive testing.

  14. A Multiprocessor-Based Sensor Fusion Software Architecture

    NASA Astrophysics Data System (ADS)

    Moxon, Bruce C.

    1988-03-01

    The ability to reason with information from a variety of sources is critical to the development of intelligent autonomous systems. Multisensor integration, or sensor fusion, is an area of research that attempts to provide a computational framework in which such perceptual reasoning can quickly and effectively be applied, enabling autonomous systems to function in unstructured, unconstrained environments. In this paper, the fundamental characteristics of the sensor fusion problem are explored. A hierarchical sensor fusion software architecture is presented as a computational framework in which information from complementary sensors is effectively combined. The concept of a sensor fusion pyramid is introduced, along with three unique computational abstractions: virtual sensors, virtual effectors, and focus of attention processing. The computing requirements of this sensor fusion architecture are investigated, and the blackboard system model is proposed as a computational methodology on which to build a sensor fusion software architecture. Finally, the Butterfly Parallel Processor is presented as a computer architecture that provides the computational capabilities required to support these intelligent systems applications.

  15. Sparse Downscaling and Adaptive Fusion of Multi-sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula, E.

    2011-12-01

    The past decades have witnessed a remarkable emergence of new sources of multiscale multi-sensor precipitation data, including data from global spaceborne active and passive sensors, regional ground-based weather surveillance radars, and local rain gauges. Resolution enhancement of remotely sensed rainfall and optimal integration of multi-sensor data promise a posteriori estimates of precipitation fluxes with increased accuracy and resolution to be used in hydro-meteorological applications. In this context, new frameworks are proposed for resolution enhancement and multiscale multi-sensor precipitation data fusion, which capitalize on two main observations: (1) sparseness of remotely sensed precipitation fields in appropriately chosen transformed domains (e.g., wavelet space), which promotes the use of the newly emerged theory of sparse representation and compressive sensing for resolution enhancement; (2) a conditionally Gaussian Scale Mixture (GSM) parameterization in the wavelet domain, which allows exploiting efficient linear estimation methodologies while capturing the non-Gaussian data structure of rainfall. The proposed methodologies are demonstrated using a data set of coincidental observations of precipitation reflectivity images by the spaceborne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite and ground-based NEXRAD weather surveillance Doppler radars. Uniqueness and stability of the solution, capture of the non-Gaussian singular structure of rainfall, reduced estimation uncertainty, and computational efficiency are the main advantages of the proposed methodologies over commonly used standard Gaussian techniques.

  16. Facility Monitoring: A Qualitative Theory for Sensor Fusion

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2001-01-01

    Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures, where numerical and statistical methods are the most common data fusion methods. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviors from its data: it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory in which each sensor in the system is a HAS. Given the rich qualitative information from each HAS, a paradigm and formal definitions are provided so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes it possible to implement intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. The paradigm facilitates a balanced distribution of intelligence (code and/or hardware) across the sensor level, the process/system level, and a higher controller level. The primary application of interest is intelligent health management of rocket engine test stands.
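
    A minimal sketch, under assumed thresholds, of the kind of qualitative behavior extraction a highly autonomous sensor might perform on its own data stream (noise level, spikes, drift). The specific detectors and parameters are illustrative, not the paper's algorithms.

      import numpy as np

      def qualitative_features(x, spike_sigma=4.0, drift_tol=1e-3):
          """Extract simple qualitative behaviors from a sensor time series:
          noise level, spike locations, and presence of slow drift."""
          x = np.asarray(x, dtype=float)
          resid = x - np.convolve(x, np.ones(5) / 5, mode='same')   # detrended residual
          noise = np.std(resid)
          spikes = np.where(np.abs(resid) > spike_sigma * noise)[0]  # spike indices
          slope = np.polyfit(np.arange(len(x)), x, 1)[0]             # linear drift rate
          return {'noise_level': noise,
                  'spike_indices': spikes,
                  'drift': abs(slope) > drift_tol}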

  18. ELIPS: Toward a Sensor Fusion Processor on a Chip

    NASA Technical Reports Server (NTRS)

    Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James

    1998-01-01

    The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based, and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way that classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference, and neural modules built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data sets, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system, and a neural network) have been fabricated in analog VLSI hardware and have demonstrated microsecond processing times.

  19. A survey of multi-sensor data fusion systems

    NASA Astrophysics Data System (ADS)

    Linn, R. J.; Hall, D. L.; Llinas, J.

    1991-08-01

    Multisensor data fusion integrates data from multiple sensors (and types of sensors) to perform inferences which are more accurate and specific than those from processing single-sensor data. Levels of inference range from target detection and identification to higher level situation assessment and threat assessment. This paper provides a survey of more than 50 data fusion systems and summarizes their application, development environment, system status and key techniques. The techniques are mapped to a taxonomy previously developed by Hall and Linn (1990); these include positional fusion techniques, such as association and estimation, and identity fusion methods, including statistical methods, nonparametric methods, and cognitive techniques (e.g. templating, knowledge-based systems, and fuzzy reasoning). An assessment of the state of fusion system development is provided.

  20. Driver drowsiness detection using multimodal sensor fusion

    NASA Astrophysics Data System (ADS)

    Andreeva, Elena O.; Aarabi, Parham; Philiastides, Marios G.; Mohajer, Keyvan; Emami, Majid

    2004-04-01

    This paper proposes a multi-modal sensor fusion algorithm for the estimation of driver drowsiness. Driver sleepiness is believed to be responsible for more than 30% of passenger car accidents and for 4% of all accident fatalities. In commercial vehicles, drowsiness is blamed for 58% of single truck accidents and 31% of commercial truck driver fatalities. This work proposes an innovative automatic sleep-onset detection system. Using multiple sensors, the driver's body is studied as a mechanical structure of springs and dampeners. The sleep-detection system consists of highly sensitive triple-axial accelerometers to monitor the driver's upper body in 3-D. The subject is modeled as a linear time-variant (LTV) system. An LMS adaptive filter estimation algorithm generates the transfer function (i.e. weight coefficients) for this LTV system. Separate coefficients are generated for the awake and asleep states of the subject. These coefficients are then used to train a neural network. Once trained, the neural network classifies the condition of the driver as either awake or asleep. The system has been tested on a total of 8 subjects. The tests were conducted on sleep-deprived individuals for the sleep state and on fully awake individuals for the awake state. When trained and tested on the same subject, the system detected sleep and awake states of the driver with a success rate of 95%. When the system was trained on three subjects and then retested on a fourth "unseen" subject, the classification rate dropped to 90%. Furthermore, it was attempted to correlate driver posture and sleepiness by observing how car vibrations propagate through a person's body. Eight additional subjects were studied for this purpose. The results obtained in this experiment proved inconclusive, which was attributed to significant differences in the individual habitual postures.
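
    A sketch of the system-identification step described above: an LMS-type (here normalized) adaptive filter estimating the weight coefficients of the driver's body response from accelerometer data. The variable names, filter length, and normalization term are assumptions added for illustration; the paper's exact LMS configuration is not reproduced.

      import numpy as np

      def lms_identify(u, d, n_taps=16, mu=0.01):
          """Identify a transfer function between a vibration input u and an
          upper-body acceleration d with a normalized LMS adaptive filter;
          the converged weights are the coefficients later fed to a classifier."""
          w = np.zeros(n_taps)
          for n in range(n_taps, len(u)):
              x = u[n - n_taps:n][::-1]            # most recent input samples
              e = d[n] - w @ x                     # prediction error
              w += mu * e * x / (x @ x + 1e-12)    # normalized LMS update
          return w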

  1. An Alternate View Of Munition Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Mayersak, J. R.

    1988-08-01

    An alternate multimode sensor fusion scheme is treated. The concept is designed to acquire and engage high-value relocatable targets in a lock-on-after-launch sequence. The approach uses statistical decision concepts to determine the authority assigned to each mode in the acquisition-sequence voting and decision process. Statistical target classification and recognition in the engagement sequence is accomplished through variable-length feature vectors set by adaptive logic. The approach uses multiple decision spaces for acquisition and classification, and the number of spaces selected is adaptively weighted and adjusted. The scheme uses the type of climate (arctic, temperate, desert, or equatorial), diurnal effects (time of day), type of background, type of countermeasures present (signature suppression or obscuration, false-target decoys, or electronic warfare), and other factors to make these selections. The approach is discussed in simple terms. Voids and deficiencies in the statistical database used to train such algorithms are discussed. The approach is being developed to engage deep-battle targets such as surface-to-surface missile systems, air defense units, and self-propelled artillery.

  2. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking.

    PubMed

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-05-06

    Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems and can effectively help the system achieve its goal under the given constraints. Because of the time-varying nature of the task network system, induced by moving nodes and non-cooperative targets, and because of limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, over a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors and the associated sensors taking part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. First, two system performance indices are introduced; in particular, the survivability index is presented and defined. Second, based on the two indices and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established, and solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms.

  3. Multisensor fusion with non-optimal decision rules: the challenges of open world sensing

    NASA Astrophysics Data System (ADS)

    Minor, Christian; Johnson, Kevin

    2014-05-01

    In this work, simple, generic models of chemical sensing are used to simulate sensor array data and to illustrate the impact on overall system performance that specific design choices impart. The ability of multisensor systems to perform multianalyte detection (i.e., distinguish multiple targets) is explored by examining the distinction between fundamental design-related limitations stemming from mismatching of mixture composition to fused sensor measurement spaces, and limitations that arise from measurement uncertainty. Insight on the limits and potential of sensor fusion to robustly address detection tasks in realistic field conditions can be gained through an examination of a) the underlying geometry of both the composition space of sources one hopes to elucidate and the measurement space a fused sensor system is capable of generating, and b) the informational impact of uncertainty on both of these spaces. For instance, what is the potential impact on sensor fusion in an open world scenario where unknown interferants may contaminate target signals? Under complex and dynamic backgrounds, decision rules may implicitly become non-optimal and adding sensors may increase the amount of conflicting information observed. This suggests that the manner in which a decision rule handles sensor conflict can be critical in leveraging sensor fusion for effective open world sensing, and becomes exponentially more important as more sensors are added. Results and design considerations for handling conflicting evidence in Bayes and Dempster-Shafer fusion frameworks are presented. Bayesian decision theory is used to provide an upper limit on detector performance of simulated sensor systems.
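
    For reference, a small sketch of Dempster's rule of combination, the step whose renormalization of conflicting mass is the usual point of contention under open-world conditions. Focal elements are represented as frozensets; the example mass assignments are invented.

      def dempster_combine(m1, m2):
          """Dempster's rule of combination for two mass functions over
          subsets of a frame of discernment. The conflict mass K is
          renormalized away, which is exactly the behaviour debated when
          sensors conflict heavily."""
          combined, conflict = {}, 0.0
          for a, ma in m1.items():
              for b, mb in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + ma * mb
                  else:
                      conflict += ma * mb
          return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

      # Example: two detectors disagreeing about target vs clutter
      m1 = {frozenset({'target'}): 0.8, frozenset({'target', 'clutter'}): 0.2}
      m2 = {frozenset({'clutter'}): 0.6, frozenset({'target', 'clutter'}): 0.4}
      print(dempster_combine(m1, m2))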

  4. Research on the strategy of underwater united detection fusion and communication using multi-sensor

    NASA Astrophysics Data System (ADS)

    Xu, Zhenhua; Huang, Jianguo; Huang, Hai; Zhang, Qunfei

    2011-09-01

    In order to solve the distributed detection fusion problem in underwater target detection when the signal-to-noise ratio (SNR) of the acoustic channel is low, a new strategy for united detection fusion and communication using multiple sensors was proposed. The performance of detection fusion was studied and compared based on the Neyman-Pearson principle when the binary phase shift keying (BPSK) and on-off keying (OOK) modes are used by the local sensors. A comparative simulation and analysis between the optimal likelihood ratio test and the proposed strategy was completed, and both the theoretical analysis and the simulation indicate that the proposed strategy can improve the detection performance effectively. In theory, the proposed strategy of united detection fusion and communication is of great significance to the establishment of an underwater target detection system.

  5. Dynamic gesture recognition based on multiple sensors fusion technology.

    PubMed

    Wenhui, Wang; Xiang, Chen; Kongqiao, Wang; Xu, Zhang; Jihai, Yang

    2009-01-01

    This paper investigates the roles of a three-axis accelerometer, surface electromyography sensors, and a webcam for dynamic gesture recognition. A decision-level multiple-sensor fusion method based on action elements is proposed to distinguish a set of 20 kinds of dynamic hand gestures. Experiments are designed and conducted to collect the three kinds of sensor data streams simultaneously during gesture implementation and to compare the performance of different sensor subsets in gesture recognition. Experimental results from three subjects show that the combination of the three kinds of sensors achieves recognition accuracies of 87.5%-91.8%, substantially higher than those of the single-sensor conditions. This study is valuable for realizing continuous and dynamic gesture recognition based on multiple-sensor fusion technology for multimodal interaction.

  6. Soft adaptive fusion of sensor energy for large-scale sensor networks (SAFE)

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2009-04-01

    Target tracking for network surveillance systems has gained significant interest, especially in sensitive areas such as homeland security, battlefield intelligence, and facility surveillance. Most current sensor network protocols do not address the need for multi-sensor fusion-based target tracking schemes, which is crucial for the longevity of the sensor network. In this paper, we present an efficient fusion model for target tracking in cluster-based large-scale sensor networks. The new scheme is inspired by image processing: it perceives the sensor network as an energy map of sensor stimuli and applies typical image processing operations to this map, such as filtering, convolution, clustering, and segmentation, to achieve high-level perception and understanding of the situation. The new fusion model is called Soft Adaptive Fusion of Sensor Energies (SAFE). SAFE performs soft fusion of the energies collected by a local region of sensors in a large-scale sensor network. This local fusion is then transmitted by the head node to a base station to update the common operational picture with evolving events of interest. Simulated scenarios showed that SAFE is promising, demonstrating significant improvements in target tracking reliability, uncertainty, and efficiency.

  7. Physiological sensor signals classification for healthcare using sensor data fusion and case-based reasoning.

    PubMed

    Begum, Shahina; Barua, Shaibal; Ahmed, Mobyen Uddin

    2014-07-03

    Today, clinicians often diagnose and classify diseases based on information collected from several physiological sensor signals. However, sensor signals can easily be corrupted by noise or interference, and due to large individual variations the sensitivity of different physiological sensors can also vary. Therefore, fusion of multiple sensor signals is valuable for providing a more robust and reliable decision. This paper demonstrates a physiological sensor signal classification approach using sensor signal fusion and case-based reasoning. The proposed approach has been evaluated on the task of classifying individuals as stressed or relaxed using sensor data fusion. Physiological sensor signals, i.e., Heart Rate (HR), Finger Temperature (FT), Respiration Rate (RR), Carbon dioxide (CO2) and Oxygen Saturation (SpO2), are collected during the data collection phase. Here, sensor fusion has been done in two different ways: (i) decision-level fusion using features extracted through traditional approaches; and (ii) data-level fusion using features extracted by means of Multivariate Multiscale Entropy (MMSE). Case-Based Reasoning (CBR) is applied for the classification of the signals. The experimental results show that the proposed system classifies individuals as stressed or relaxed with 87.5% accuracy compared to an expert in the domain. It thus shows promising results in the psychophysiological domain, and it may be possible to adapt this approach to other relevant healthcare systems.

  8. Physiological Sensor Signals Classification for Healthcare Using Sensor Data Fusion and Case-Based Reasoning

    PubMed Central

    Begum, Shahina; Barua, Shaibal; Ahmed, Mobyen Uddin

    2014-01-01

    Today, clinicians often diagnose and classify diseases based on information collected from several physiological sensor signals. However, sensor signals can easily be corrupted by noise or interference, and due to large individual variations the sensitivity of different physiological sensors can also vary. Therefore, fusion of multiple sensor signals is valuable for providing a more robust and reliable decision. This paper demonstrates a physiological sensor signal classification approach using sensor signal fusion and case-based reasoning. The proposed approach has been evaluated on the task of classifying individuals as stressed or relaxed using sensor data fusion. Physiological sensor signals, i.e., Heart Rate (HR), Finger Temperature (FT), Respiration Rate (RR), Carbon dioxide (CO2) and Oxygen Saturation (SpO2), are collected during the data collection phase. Here, sensor fusion has been done in two different ways: (i) decision-level fusion using features extracted through traditional approaches; and (ii) data-level fusion using features extracted by means of Multivariate Multiscale Entropy (MMSE). Case-Based Reasoning (CBR) is applied for the classification of the signals. The experimental results show that the proposed system classifies individuals as stressed or relaxed with 87.5% accuracy compared to an expert in the domain. It thus shows promising results in the psychophysiological domain, and it may be possible to adapt this approach to other relevant healthcare systems. PMID:24995374

  9. Linear optimal control of tokamak fusion devices

    SciTech Connect

    Kessel, C.E.; Firestone, M.A.; Conn, R.W.

    1989-05-01

    The control of plasma position, shape and current in a tokamak fusion reactor is examined using linear optimal control. These advanced tokamaks are characterized by non up-down symmetric coils and structure, thick structure surrounding the plasma, eddy currents, shaped plasmas, superconducting coils, vertically unstable plasmas, and hybrid function coils providing ohmic heating, vertical field, radial field, and shaping field. Models of the electromagnetic environment in a tokamak are derived and used to construct control gains that are tested in nonlinear simulations with initial perturbations. The issues of applying linear optimal control to advanced tokamaks are addressed, including complex equilibrium control, choice of cost functional weights, the coil voltage limit, discrete control, and order reduction. Results indicate that the linear optimal control is a feasible technique for controlling advanced tokamaks where the more common classical control will be severely strained or will not work. 28 refs., 13 figs.
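
    A minimal sketch of the linear-quadratic gain computation underlying such controllers, using an invented two-state plant (one unstable mode, one coil-current state) in place of a real tokamak equilibrium model; scipy's continuous algebraic Riccati solver provides the optimal feedback gain.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Toy 2-state plant standing in for linearized plasma vertical dynamics
      # (illustrative numbers only, not a real equilibrium model).
      A = np.array([[5.0,  1.0],
                    [0.0, -2.0]])
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])   # state weights in the quadratic cost
      R = np.array([[0.1]])      # control (coil voltage) weight

      P = solve_continuous_are(A, B, Q, R)      # Riccati solution
      K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

    The closed-loop eigenvalues printed at the end confirm that the unstable open-loop mode has been stabilized by the optimal gain.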

  10. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    PubMed Central

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, camera, infrared, and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, sensor fusion helps to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot’s wheels), and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  11. Optimality of Rate Balancing in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Tarighati, Alla; Jalden, Joakim

    2016-07-01

    We consider the problem of distributed binary hypothesis testing in a parallel network topology where sensors independently observe some phenomenon and send a finite-rate summary of their observations to a fusion center for the final decision. We explicitly consider a scenario in which (integer) rate messages are sent over an error-free multiple access channel, modeled by a sum-rate constraint at the fusion center. This problem was previously studied by Chamberland and Veeravalli, who provided sufficient conditions for the optimality of one-bit sensor messages. Their result, however, crucially depends on the feasibility of having as many one-bit sensors as the (integer) sum-rate constraint of the multiple access channel, an assumption that often cannot be satisfied in practice. This prompts us to consider the case of an a priori limited number of sensors, and we provide a sufficient condition under which having no two sensors differ in rate by more than one bit, so-called rate balancing, is an optimal strategy with respect to the Bhattacharyya distance between the hypotheses at the input to the fusion center. We further discuss explicit observation models under which these sufficient conditions are satisfied.
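
    A small sketch of the figure of merit used here: the Bhattacharyya distance between the fusion-center input distributions under the two hypotheses (for independent sensors the per-sensor distances add). The example message distributions for a 2-bit sensor are invented.

      import numpy as np

      def bhattacharyya_distance(p, q):
          """Bhattacharyya distance between two discrete distributions over
          the same alphabet; larger values mean easier detection."""
          p, q = np.asarray(p, float), np.asarray(q, float)
          return -np.log(np.sum(np.sqrt(p * q)))

      # Example: distribution of a 2-bit sensor message under H0 and H1
      p_h0 = [0.70, 0.20, 0.07, 0.03]
      p_h1 = [0.10, 0.20, 0.30, 0.40]
      print(bhattacharyya_distance(p_h0, p_h1))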

  12. Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    PubMed Central

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted. PMID:22294927

  13. Decentralized sensor fusion for Ubiquitous Networking Robotics in Urban Areas.

    PubMed

    Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T J

    2010-01-01

    In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted.

  14. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-01-01

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic level, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  15. A hierarchical structure approach to MultiSensor Information Fusion

    SciTech Connect

    Maren, A.J.; Pap, R.M.; Harston, C.T.

    1989-12-31

    A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic level, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  16. Sensor fusion methods for high performance active vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Collette, C.; Matichard, F.

    2015-04-01

    Sensor noise often limits the performance of active vibration isolation systems. Inertial sensors used in such systems are available with a wide variety of instrument noise and size characteristics; however, the most sensitive instruments are often the biggest and the heaviest. Consequently, high-performance active isolators sometimes embed many tens of kilograms of instrumentation. The weight and size of the instrumentation can add unwanted constraints on the design: they tend to lower the structure's natural frequencies and to reduce the collocation between sensors and actuators, and both effects tend to reduce feedback control performance and stability. This paper discusses sensor fusion techniques that can be used to increase the control bandwidth and/or the stability. For this, the low-noise inertial instrument signal dominates the fusion at low frequency to provide vibration isolation. Other types of sensors (relative motion, smaller but noisier inertial, or force sensors) are used at higher frequencies to increase stability. Several sensor fusion configurations are studied. The paper shows the improvement that can be expected for several case studies, including rigid equipment, flexible equipment, and flexible equipment mounted on a flexible support structure.
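
    A minimal complementary-filter sketch of the frequency-split fusion idea: the inertial signal dominates below the crossover frequency and the relative-motion signal above it. First-order Butterworth filters with a common cutoff are used here because their transfer functions sum to one; the crossover choice and signal names are assumptions, not the configurations studied in the paper.

      import numpy as np
      from scipy import signal

      def complementary_fusion(inertial, relative, fs, crossover_hz):
          """Blend a low-noise inertial signal below the crossover frequency
          with a relative-motion signal above it, using complementary
          first-order filters."""
          wn = crossover_hz / (fs / 2.0)                  # normalized crossover
          b_lp, a_lp = signal.butter(1, wn, 'lowpass')
          b_hp, a_hp = signal.butter(1, wn, 'highpass')
          return (signal.lfilter(b_lp, a_lp, inertial)
                  + signal.lfilter(b_hp, a_hp, relative))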

  17. Sensor and information fusion for improved hostile fire situational awareness

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.; Ludwig, William D.

    2010-04-01

    A research-oriented Army Technology Objective (ATO) named Sensor and Information Fusion for Improved Hostile Fire Situational Awareness uniquely focuses on the underpinning technologies to detect and defeat any hostile threat before, during, and after its occurrence. This is a joint effort led by the Army Research Laboratory, with the Armaments and the Communications and Electronics Research, Development, and Engineering Centers (ARDEC and CERDEC) as partners. It addresses distributed sensor fusion and collaborative situational awareness enhancements, focusing on the underpinning technologies to detect/identify potential hostile shooters prior to firing a shot and to detect/classify/locate the firing point of hostile small arms, mortars, rockets, RPGs, and missiles after the first shot. A field experiment addressed not only diverse-modality sensor performance and sensor fusion benefits, but also gathered useful data to develop and demonstrate ad hoc networking and the dissemination of relevant data and actionable intelligence. Represented at this field experiment were various sensor platforms such as UGS, soldier-worn sensors, manned ground vehicles, UGVs, UAVs, and helicopters. The ATO continues to evaluate applicable technologies, including retro-reflection, UV, IR, visible, glint, LADAR, radar, acoustic, seismic, E-field, narrow-band emission, and image processing techniques, to detect the threats with very high confidence. Networked fusion of multi-modal data will reduce false alarms and improve actionable intelligence by distributing grid coordinates, detection report features, and imagery of threats.

  18. Development of a sensor integration strategy for robotic application based on geometric optimization

    NASA Astrophysics Data System (ADS)

    Nandi, Gora C.; Mitra, Debjani

    2001-03-01

    Sensor fusion is an important technology whose use is growing rapidly due to its tremendous application potential. Appropriate fusion techniques especially need to be developed when a system requires redundant sensors: the more redundancy among the sensors, the higher the computational complexity of controlling the system and the higher its intelligence level. This research presents a strategy for multiple-sensor fusion based on geometric optimization. An uncertainty model was developed for each sensor. Using Lagrangian optimization techniques, the individual sensors' uncertainties were fused to reduce the overall uncertainty and to generate a consensus among the sensors regarding their acceptable values. Using a fission-fusion architecture, the precision level was further improved. Subsequently, using feedback from the fused sensory information, the net error was further minimized to any pre-assigned value by developing a fusion technique in the differential domain (FDD). The techniques are illustrated using synthesized data from two types of sensors (an optical encoder and a single-camera vision sensor). The experience of applying the same fusion strategy to improving the correctness of stereo matching using multiple baselines is also discussed.
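
    A minimal sketch of the core fusion step: inverse-variance (minimum-variance) weighting of independent sensor estimates, which is the closed-form result of a Lagrangian optimization over weights constrained to sum to one. The example numbers for an encoder and a vision sensor are invented, and this is a generic illustration rather than the paper's full FDD strategy.

      import numpy as np

      def min_variance_fusion(estimates, variances):
          """Fuse independent sensor estimates of the same quantity with
          inverse-variance weights; returns the fused estimate and its
          (reduced) variance."""
          var = np.asarray(variances, float)
          w = 1.0 / var
          w /= w.sum()
          fused = np.sum(w * np.asarray(estimates, float))
          fused_var = 1.0 / np.sum(1.0 / var)
          return fused, fused_var

      # Example: encoder and vision estimates of the same joint angle (radians)
      print(min_variance_fusion([0.52, 0.49], [1e-4, 9e-4]))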

  19. Geometric methods for optimal sensor design.

    PubMed

    Belabbas, M-A

    2016-01-01

    The Kalman-Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design.

  20. Geometric methods for optimal sensor design

    PubMed Central

    Belabbas, M.-A.

    2016-01-01

    The Kalman–Bucy filter is the optimal estimator of the state of a linear dynamical system from sensor measurements. Because its performance is limited by the sensors to which it is paired, it is natural to seek optimal sensors. The resulting optimization problem is however non-convex. Therefore, many ad hoc methods have been used over the years to design sensors in fields ranging from engineering to biology to economics. We show in this paper how to obtain optimal sensors for the Kalman filter. Precisely, we provide a structural equation that characterizes optimal sensors. We furthermore provide a gradient algorithm and prove its convergence to the optimal sensor. This optimal sensor yields the lowest possible estimation error for measurements with a fixed signal-to-noise ratio. The results of the paper are proved by reducing the optimal sensor problem to an optimization problem on a Grassmannian manifold and proving that the function to be minimized is a Morse function with a unique minimum. The results presented here also apply to the dual problem of optimal actuator design. PMID:26997885

  1. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate

  2. Conflict management based on belief function entropy in sensor fusion.

    PubMed

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Wireless sensor networks play an important role in intelligent navigation: they incorporate a group of sensors to overcome the limitations of a single detection system. Dempster-Shafer evidence theory can combine the sensor data of a wireless sensor network by data fusion, which improves the accuracy and reliability of the detection system. However, because the sensor data come from different sources and are gathered in uncertain environments, there may be conflict among them. This paper therefore proposes a new method combining Deng entropy and an evidence distance to address the issue. First, Deng entropy is adopted to measure the uncertain information. Then, the evidence distance is applied to measure the degree of conflict. The new method copes with conflict effectively and improves the accuracy and reliability of the detection system. An example illustrates the efficiency of the new method, and the result is compared with those of existing methods.
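
    A small sketch of Deng entropy, the uncertainty measure adopted in the first step above; focal elements are represented as frozensets and the example basic probability assignment is invented. The evidence-distance step of the method is not shown.

      import math

      def deng_entropy(m):
          """Deng entropy of a basic probability assignment: focal elements
          (frozensets) map to masses. It reduces to Shannon entropy when all
          focal elements are singletons."""
          return -sum(mass * math.log2(mass / (2 ** len(a) - 1))
                      for a, mass in m.items() if mass > 0)

      # Example: mass split between a singleton and a two-element focal set
      m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
      print(deng_entropy(m))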

  3. Oil exploration oriented multi-sensor image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Xiaobing, Zhang; Wei, Zhou; Mengfei, Song

    2017-04-01

    In order to accurately forecast fractures and the dominant fracture direction in oil exploration, this paper proposes a novel multi-sensor image fusion algorithm. The main innovations are the introduction of the dual-tree complex wavelet transform (DTCWT) into the data fusion and the division of an image into several regions before fusion. The DTCWT is a type of wavelet transform designed to solve the problem of signal decomposition and reconstruction using two parallel real wavelet transforms. We utilize the DTCWT to segment the features of the input images and generate a region map, and then exploit the normalized Shannon entropy of each region to design the priority function. To test the effectiveness of the proposed multi-sensor image fusion algorithm, four standard pairs of images are used to construct the dataset. Experimental results demonstrate that the proposed algorithm achieves high accuracy in multi-sensor image fusion, especially for images from oil exploration.
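
    A hedged sketch of a region-priority function of the kind described: the normalized Shannon entropy of coefficient energy within each labelled region. The exact priority function of the paper is not reproduced; the normalization and the small epsilon guards are assumptions.

      import numpy as np

      def region_priority(coeffs, region_map):
          """Normalized Shannon entropy of coefficient energy inside each
          region of a label map; higher-entropy regions receive higher
          priority when selecting which source supplies the fused coefficients."""
          priorities = {}
          for label in np.unique(region_map):
              c = np.abs(coeffs[region_map == label]) ** 2
              p = c / (c.sum() + 1e-12)
              h = -np.sum(p * np.log2(p + 1e-12))
              priorities[label] = h / np.log2(max(len(p), 2))  # scale to [0, 1]
          return priorities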

  4. STAC: a comprehensive sensor fusion model for scene characterization

    NASA Astrophysics Data System (ADS)

    Kira, Zsolt; Wagner, Alan R.; Kennedy, Chris; Zutty, Jason; Tuell, Grady

    2015-05-01

    We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell phones or from human intelligence? In addressing these questions we have found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC allows for describing the progression of multi-sensor raw data through increasing levels of abstraction, and provides a way to easily compare fusion strategies. It provides for unambiguous description of how multi-sensor data are combined, the computational algorithms being used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model, and compare it to other existing models.

  5. Navigation in Difficult Environments: Multi-Sensor Fusion Techniques

    DTIC Science & Technology

    2010-03-01

    data are applied to improve the robustness of secondary sensors’ signal processing. Applications of the multi-sensor fusion approach are illustrated... algorithms. 1.0 MOTIVATION Many existing and prospective applications of navigation systems would benefit notably from the ability to navigate... accurately and reliably in difficult environments. Examples of difficult navigation scenarios include urban canyons, indoor applications, radio

  6. Multiple image sensor data fusion through artificial neural networks

    USDA-ARS?s Scientific Manuscript database

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  7. Sensor feature fusion for detecting buried objects

    SciTech Connect

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.; Hernandez, J.E.; Buhl, M.R.; Schaich, P.C.; Kane, R.J.; Barth, M.J.; DelGrande, N.K.

    1993-04-01

    Given multiple registered images of the earth's surface from dual-band sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two sensors (5 micron and 10 micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. The first step, referred to as feature selection, determines the features of sub-images which result in the greatest separability among the classes. The second step, image labeling, uses the selected features and the decisions from a pattern classifier to label the regions in the image which are likely to correspond to buried mines. We extract features from the images and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the sensors add value to the detection system. The most important features from the various sensors are fused using supervised learning pattern classifiers (including neural networks). We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.
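
    The pipeline described above can be sketched generically as feature-level fusion followed by feature selection and a supervised classifier; the sketch below uses scikit-learn with placeholder data and is not the authors' implementation.

    ```python
    # Generic feature-level fusion: concatenate per-sensor features, rank them by
    # class separability, keep the top few, and train a neural-network classifier.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X_ir = rng.normal(size=(200, 8))     # placeholder features from dual-band IR imagery
    X_gpr = rng.normal(size=(200, 6))    # placeholder features from GPR imagery
    y = rng.integers(0, 2, size=200)     # 1 = mine-like region, 0 = clutter

    X = np.hstack([X_ir, X_gpr])         # feature-level fusion: concatenate sources

    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=6),   # keep the most discriminative features
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)
    print(model.predict(X[:5]))
    ```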

  8. Context extraction for local fusion for landmine detection with multi-sensor systems

    NASA Astrophysics Data System (ADS)

    Frigui, Hichem; Gader, Paul D.; Ben Abdallah, Ahmed Chamseddine

    2009-05-01

    We present a local method for fusing the results of several landmine detectors using Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. The detectors considered include the Edge Histogram Descriptor (EHD), Hidden Markov Models (HMM), and the Spectral Correlation Feature (SCF) for the GPR sensor, and a feature-based classifier for the metal detector. These detectors use different types of features and different classification methods. Our approach, called Context Extraction for Local Fusion with Feature Discrimination (CELF-FD), is a local approach that adapts the fusion method to different regions of the feature space. It is based on a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function. The context identification component strives to partition the input feature space into clusters and identify the relevant features within each cluster. The fusion component strives to learn the optimal fusion parameters within each cluster. Results on large and diverse GPR and WEMI data collections show that the proposed method can identify meaningful and coherent clusters and that these clusters require different fusion parameters. Our initial experiments have also indicated that CELF-FD outperforms the original CELF algorithm and all individual detectors.

  9. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    NASA Astrophysics Data System (ADS)

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can benefit heavily from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information, which could complement the visual sensor data, is not yet established. In addition, biomechanical modeling of the surgical site can help in reflecting changes which cannot be captured by intraoperative sensors. We present a setting in which a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor, as well as the accuracy of the fusion of force measurements, surface reconstruction and biomechanical modeling.

  10. LADAR And FLIR Based Sensor Fusion For Automatic Target Classification

    NASA Astrophysics Data System (ADS)

    Selzer, Fred; Gutfinger, Dan

    1989-01-01

    The purpose of this report is to show results of automatic target classification and sensor fusion for forward looking infrared (FLIR) and laser radar sensors. The sensor fusion database was acquired from the Naval Weapon Center and consists of coregistered laser radar (range and reflectance images), FLIR (raw and preprocessed images) and TV. Using this database we have developed techniques to extract relevant object edges from the FLIR and LADAR which are correlated to wireframe models. The resulting correlation coefficients from both the LADAR and FLIR are fused using either the Bayesian or the Dempster-Shafer combination method so as to provide a higher-confidence target classification output. Finally, to minimize the correlation process the wireframe models are modified to reflect target range (size of target) and target orientation, which is extracted from the LADAR reflectance image.

  11. Multi-sensor data fusion framework for CNC machining monitoring

    NASA Astrophysics Data System (ADS)

    Duro, João A.; Padget, Julian A.; Bowen, Chris R.; Kim, H. Alicia; Nassehi, Aydin

    2016-01-01

    Reliable machining monitoring systems are essential for lowering production time and manufacturing costs. Existing, expensive monitoring systems focus on prevention/detection of tool malfunctions and provide information for process optimisation via force measurement. An alternative and cost-effective approach is to monitor acoustic emissions (AEs) from machining operations, which act as a robust proxy for the cutting process. The limitations of AEs include high sensitivity to sensor position and cutting parameters. In this paper, a novel multi-sensor data fusion framework is proposed to enable identification of the best sensor locations for monitoring cutting operations, identification of the sensors that provide the best signal, and derivation of signals with an enhanced periodic component. Our experimental results reveal that by utilising the framework, and using only three sensors, signal interpretation improves substantially and the reliability of the monitoring system is enhanced for a wide range of machining parameters. The framework provides a route to overcoming the major limitations of AE-based monitoring.

  12. Sensor Fusion for Nuclear Proliferation Activity Monitoring

    SciTech Connect

    Adel Ghanem, Ph D

    2007-03-30

    The objective of Phase 1 of this STTR project is to demonstrate a Proof-of-Concept (PoC) of the Geo-Rad system that integrates a location-aware SmartTag (made by ZonTrak) and a radiation detector (developed by LLNL). It also includes the ability to transmit the collected radiation data and location information to the ZonTrak server (ZonService). The collected data is further transmitted to a central server at LLNL (the Fusion Server) to be processed in conjunction with overhead imagery to generate location estimates of nuclear proliferation and radiation sources.

  13. Multi-Sensor Data Fusion for Future Telematics Application

    NASA Astrophysics Data System (ADS)

    Kim, Seong-Baek; Lee, Seung-Yong; Choi, Ji-Hoon; Choi, Kyung-Ho; Jang, Byung-Tae

    2003-12-01

    In this paper, we present multi-sensor data fusion for a telematics application. Successful telematics can be realized through the integration of navigation and spatial information, and well-determined acquisition of the vehicle's position plays a vital role in the application service. GPS provides the navigation data, but its performance is limited in areas with poor satellite visibility. Hence, multi-sensor fusion including an IMU (Inertial Measurement Unit), GPS (Global Positioning System) and DMI (Distance Measurement Indicator) is required to provide the vehicle's position to the service provider and the driver behind the wheel. The multi-sensor fusion is implemented via an algorithm based on the Kalman filtering technique, and navigation accuracy can be enhanced using this filtering approach. To verify the fusion approach, a land vehicle test was performed and the results are discussed. The results show that the horizontal position errors were suppressed to around 1 meter accuracy under a simulated GPS-denied environment. Under a normal GPS environment, the horizontal position errors were under 40 cm on curved trajectories and 27 cm on straight trajectories, depending on the vehicle dynamics.
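
    A minimal Kalman-filter sketch of the fusion idea, assuming a one-dimensional constant-velocity model and invented noise levels (not the authors' filter design), is shown below: dead reckoning drives the prediction step and GPS fixes, when available, drive the update step.

    ```python
    # 1-D position/velocity Kalman filter: IMU/DMI dead reckoning in the predict
    # step, GPS position fixes in the update step (fixes may drop out in urban canyons).
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
    Q = np.diag([0.02, 0.1]) * dt              # process noise (dead-reckoning drift)
    H = np.array([[1.0, 0.0]])                 # GPS measures position only
    R = np.array([[1.0]])                      # GPS noise variance (m^2)

    x = np.zeros(2)
    P = np.eye(2)

    def kf_step(x, P, gps_pos=None):
        # Predict with the dead-reckoning model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update only when a GPS fix is available.
        if gps_pos is not None:
            y = np.array([gps_pos]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        return x, P
    ```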

  14. A data fusion method in wireless sensor networks.

    PubMed

    Izadi, Davood; Abawajy, Jemal H; Ghanavati, Sara; Herawan, Tutut

    2015-01-28

    The success of a Wireless Sensor Network (WSN) deployment strongly depends on the quality of service (QoS) it provides regarding issues such as data accuracy, data aggregation delays and network lifetime maximisation. This is especially challenging in data fusion mechanisms, where a small fraction of low quality data in the fusion input may negatively impact the overall fusion result. In this paper, we present a fuzzy-based data fusion approach for WSN with the aim of increasing the QoS whilst reducing the energy consumption of the sensor network. The proposed approach is able to distinguish and aggregate only true values of the collected data as such, thus reducing the burden of processing the entire data at the base station (BS). It is also able to eliminate redundant data and consequently reduce energy consumption thus increasing the network lifetime. We studied the effectiveness of the proposed data fusion approach experimentally and compared it with two baseline approaches in terms of data collection, number of transferred data packets and energy consumption. The results of the experiments show that the proposed approach achieves better results than the baseline approaches.

  15. A Data Fusion Method in Wireless Sensor Networks

    PubMed Central

    Izadi, Davood; Abawajy, Jemal H.; Ghanavati, Sara; Herawan, Tutut

    2015-01-01

    The success of a Wireless Sensor Network (WSN) deployment strongly depends on the quality of service (QoS) it provides regarding issues such as data accuracy, data aggregation delays and network lifetime maximisation. This is especially challenging in data fusion mechanisms, where a small fraction of low quality data in the fusion input may negatively impact the overall fusion result. In this paper, we present a fuzzy-based data fusion approach for WSN with the aim of increasing the QoS whilst reducing the energy consumption of the sensor network. The proposed approach is able to distinguish and aggregate only true values of the collected data as such, thus reducing the burden of processing the entire data at the base station (BS). It is also able to eliminate redundant data and consequently reduce energy consumption thus increasing the network lifetime. We studied the effectiveness of the proposed data fusion approach experimentally and compared it with two baseline approaches in terms of data collection, number of transferred data packets and energy consumption. The results of the experiments show that the proposed approach achieves better results than the baseline approaches. PMID:25635417

  16. Optimal Sensor Locations for System Identification

    DTIC Science & Technology

    1988-03-01

    The report develops a methodology for optimal sensor locations for parametric identification of structural systems. Although parametric identification of structural systems depends on the locations at which sensors are placed and data are gathered, very little exists by way of a clear picture on optimal sensor locations for parametric identification in a noisy measurement environment. Section IV deals with an important aspect

  17. A weighting/threshold approach to sensor fusion

    SciTech Connect

    Amai, W.A.

    1988-01-01

    A weighting/threshold-based sensor fusion algorithm to decrease the false alarm rate (FAR) while maintaining a high probability of detection (PD) is being tested in the Remote Security Station (RSS). The RSS is being developed to provide temporary intrusion-detection capability on short notice. It consists of a portable, multisensor pod connected by cable to a manned control console. The pod is set up outdoors in the location where security is needed; the console and operator are located in a command bunker up to a kilometer away. The RSS software filters out alarms from low-believability sensors and also filters out alarms in low-priority areas. Each sensor's believability is proportionally encoded as a weighting, which is continually updated as a function of the environmental conditions affecting that sensor. Area priority is proportionally encoded as a threshold value for each pie-wedge area around the pod. When an event in an area triggers one or more sensors, their weightings are summed and then compared to the area threshold value. The operator is informed of the event only if the summed weighting exceeds the threshold. Extensive field testing has not yet been done, but early results show the current sensor fusion algorithm decreases the FAR at the expense of lowering the PD. To increase the PD while retaining a low FAR, the weighting/threshold algorithm will be modified to use temporal data and pattern recognition. 4 refs., 2 figs.
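
    The weighting/threshold rule lends itself to a very small sketch; the sensor names, weights and thresholds below are invented for illustration.

    ```python
    # Sum the believability weights of the sensors triggered by an event and
    # report the event only if the sum exceeds the priority threshold of the area.
    sensor_weights = {"pir": 0.6, "microwave": 0.8, "seismic": 0.4}   # believability weights
    area_thresholds = {"fence_line": 0.7, "open_field": 1.1}          # lower threshold = higher priority

    def report_event(triggered_sensors, area):
        score = sum(sensor_weights[s] for s in triggered_sensors)
        return score > area_thresholds[area]

    print(report_event({"pir", "seismic"}, "fence_line"))   # True:  1.0 > 0.7
    print(report_event({"seismic"}, "open_field"))          # False: 0.4 < 1.1
    ```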

  18. Sensor-fusion-based biometric identity verification

    SciTech Connect

    Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.; Martinez, R.F.; Bartholomew, J.W.; Jordan, J.B.; Flachs, G.M.; Bao, Z.; Zhu, L.

    1998-02-01

    Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near minimal probability of error decision algorithm.

  19. Sensor fusion in a dynamic environment

    NASA Astrophysics Data System (ADS)

    Huntsberger, Terrance L.

    1992-11-01

    The recent trend towards dynamic vision has led to the need for real-time performance in various vision and control algorithms. Some of the burden placed on algorithms using purely visual input can be lessened using multiple disparate sensors. Research into the integration of information from disparate sensors while moving through an environment has for the most part concentrated on static environments. Moving obstacles complicate tasks such as avoidance and path planning. In this paper we present a system which integrates range and visual sensory inputs for the dynamic analysis of motion within the field of view of an autonomous platform. The approach we follow combines some recently developed neural network motion analysis algorithms with an epipolar plane image technique. We report the results of some experiments on a synthesized visible/range sequence.

  20. The Human Factors of Sensor Fusion

    DTIC Science & Technology

    2008-05-01

    and it was also in contrast to systems where federated or isolated sensors, acting as single sources and providing information for a single purpose ... formal logic is used, humans often use more of the inductive process (called the method of discovery) or infer from the specific to the general (Searles ... unusual cases where ambiguity is very high. If a situation does not appear to make sense or fit an established pattern, it very well may be some act of

  1. Dynamic network based learning systems for sensor information fusion

    NASA Astrophysics Data System (ADS)

    Verma, Dinesh; Julier, Simon

    2017-05-01

    In order to obtain modularity and reconfigurability for sensor information fusion services in modern battle-spaces, dynamic service composition and dynamic topology determination are needed. In the current state of the art, such information fusion services are composed manually and programmatically. In this paper, we consider an approach towards more automation by assuming that the topology of a solution is provided, and automatically choosing the types and kinds of algorithms to be used at each step. This includes the use of contextual information and techniques such as multi-armed bandits for managing the exploration and exploitation tradeoff.
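
    As a sketch of the bandit idea for algorithm selection (an epsilon-greedy strategy with synthetic rewards, not the authors' method):

    ```python
    # Treat each candidate fusion algorithm as a bandit arm and trade off
    # exploring alternatives against exploiting the best one seen so far.
    import random

    algorithms = ["kalman", "particle", "evidence"]   # hypothetical candidate algorithms
    counts = {a: 0 for a in algorithms}
    values = {a: 0.0 for a in algorithms}             # running mean reward per algorithm

    def choose(epsilon=0.1):
        if random.random() < epsilon:                 # explore
            return random.choice(algorithms)
        return max(algorithms, key=values.get)        # exploit

    def update(algorithm, reward):
        counts[algorithm] += 1
        values[algorithm] += (reward - values[algorithm]) / counts[algorithm]

    for step in range(1000):
        a = choose()
        reward = {"kalman": 0.6, "particle": 0.7, "evidence": 0.5}[a] + random.gauss(0, 0.1)
        update(a, reward)
    print(max(algorithms, key=values.get))
    ```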

  2. Compressive Sensing Image Fusion Based on Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Li, X.; Lv, J.; Jiang, S.; Zhou, H.

    2017-09-01

    To address the difficulty of spatial matching and the large spectral distortion of traditional pixel-level image fusion algorithms, we propose a new image fusion method, called HIS-CS image fusion, that combines the HIS transformation with the recently developed theory of compressive sensing. In this algorithm, a particle swarm optimization algorithm is used to select the fusion coefficient ω. In the iterative process, the image fusion coefficient ω is taken as the particle, and the optimal value is obtained from the objective function. We then apply the compression-aware weighted fusion algorithm to remote sensing image fusion, taking the coefficient ω as the weight. The algorithm selects the fusion coefficient with a certain degree of self-adaptability, ensuring a near-optimal fusion effect. To evaluate the fused images, this paper uses five index parameters: entropy, standard deviation, average gradient, degree of distortion and peak signal-to-noise ratio. The experimental results show that the image fusion effect of the proposed algorithm is better than that of traditional methods.
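
    A minimal particle swarm optimization sketch for a scalar fusion coefficient ω is shown below; the fitness function is a placeholder for the paper's fusion-quality objective, and the PSO constants are conventional defaults rather than the authors' settings.

    ```python
    # Generic PSO over a single scalar in [0, 1].
    import random

    def fitness(omega):
        return -(omega - 0.37) ** 2          # placeholder objective with optimum at 0.37

    n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
    pos = [random.random() for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]                            # each particle's best position so far
    gbest = max(pos, key=fitness)             # best position seen by the swarm

    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            pos[i] = min(max(pos[i] + vel[i], 0.0), 1.0)
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i]
            if fitness(pos[i]) > fitness(gbest):
                gbest = pos[i]

    print(round(gbest, 3))                    # should approach 0.37
    ```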

  3. Computation of Optimal Actuator/Sensor Locations

    DTIC Science & Technology

    2013-12-26

    Final report (covering Oct. 1, 2010 to Sept. 30, 2013) by Kirsten Morris, University of Waterloo, on the computation of optimal actuator/sensor locations. Cited work includes D. Kasinathan, K. A. Morris, and S. D. Yang, "Calculation of H∞-optimal actuator locations for ...," Journal of Computational Science, 2012 (submitted).

  4. Data fusion of multiple kinect sensors for a rehabilitation system.

    PubMed

    Huibin Du; Yiwen Zhao; Jianda Han; Zheng Wang; Guoli Song

    2016-08-01

    Kinect-like depth sensors have been widely used in rehabilitation systems. However, a single depth sensor handles limb occlusion, data loss and data errors poorly, making it less reliable. This paper focuses on using two Kinect sensors and a data fusion method to solve these problems. First, the two Kinect sensors capture the motion of the hemiplegic patient's healthy arm; second, the data are merged using the Set-Membership Filter (SMF); the motion data are then mirrored about the Middle-Plane; finally, a wearable robotic arm drives the patient's paralytic arm so that the patient can interactively complete a variety of recovery actions prompted by a computer with 3D animation games.

  5. Coresident sensor fusion and compression using the wavelet transform

    SciTech Connect

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image, and can be applied in either an ad hoc or a model-based manner. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
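
    A generic wavelet-domain fusion rule (average the approximation, keep the larger-magnitude detail coefficients) can be sketched with the PyWavelets package as follows; this is an illustration of the approach, not the report's code.

    ```python
    # Decompose two registered images, fuse coefficients, and reconstruct.
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                      # average the coarse approximation
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in ((ha, hb), (va, vb), (da, db))))
        return pywt.waverec2(fused, wavelet)
    ```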

  6. Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.

    PubMed

    La, Hung Manh; Sheng, Weihua

    2013-04-01

    In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
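
    The consensus step at the heart of the distributed fusion can be sketched as follows, assuming a small ring network and synthetic local estimates (not the paper's weighted filter design).

    ```python
    # Average-consensus iteration: each node repeatedly nudges its estimate
    # toward its neighbours' estimates until the network agrees on the average.
    import numpy as np

    n = 6
    neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring network
    x = np.array([2.0, 2.4, 1.8, 2.2, 5.0, 2.1])   # noisy local estimates of the field value
    eps = 0.2                                       # consensus step size

    for _ in range(100):
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbours[i]) for i in range(n)])

    print(x)   # all nodes converge to the average of the initial estimates
    ```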

  7. Diagnostics and data fusion of robotic sensors

    SciTech Connect

    Dhar, M.; Bardsley, S; Cowper, L.; Hamm, R.; Jammu, V.; Wagner, J.

    1996-12-31

    Robotic systems for remediation of hazardous waste sites must be highly reliable to avoid equipment failures and subsequent possible exposure of personnel to hazardous environments. Safe, efficient cleanup operations also require accurate, complete knowledge of the task space. This paper presents progress made on an 18-month program to meet these needs. To enhance robot reliability, a conceptual design of a monitoring and diagnostic system is being developed to predict the onset of mechanical failure modes, provide maximum lead time to make operational changes or repairs, and minimize the occurrence of on-site breakdowns. To ensure safe operation, a comprehensive software package is being developed that will fuse data from multiple surface mapping sensors and poses so as to reduce the error effects in individual data points and provide accurate 3-D maps of a work space.

  8. Sensor fusion by pseudo information measure: a mobile robot application.

    PubMed

    Asharif, Mohammad Reza; Moshiri, Behzad; HoseinNezhad, Reza

    2002-07-01

    In any autonomous mobile robot, one of the most important capabilities to design and implement is environment perception. In this paper, a new approach is formulated to integrate sensory data for generation of an occupancy grid map of the environment. This method is an extended version of the Bayesian fusion method for independent sources of information. The performance of the proposed fusion method and its sensitivity are discussed. Map-building simulations for a cylindrical robot with eight ultrasonic sensors and a mapping implementation for a Khepera robot were carried out in simulation and experimental work, respectively. A new neural structure is introduced for converting the proximity data given by the Khepera IR sensors to occupancy probabilities. Path planning experiments have also been applied to the resulting maps. For each map, two factors are calculated: the fitness and the augmented occupancy of the map with respect to the ideal map. The length and the least distance to obstacles are the two further factors calculated for the routes produced by the path planning experiments. Experimental and simulation results show that the new fusion formulas produce more informative maps of the environment, from which more appropriate routes can be obtained. There is a tradeoff between the length of the resulting routes and their safety, and by choosing a proper fusion function this tradeoff can be tuned for different map-building applications.
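
    For reference, a standard Bayesian occupancy-grid update in log-odds form is sketched below; the paper's pseudo-information fusion formulas extend this baseline, which is shown here only as the underlying idea.

    ```python
    # Each sensor reading contributes an occupancy probability per cell,
    # accumulated independently in log-odds space.
    import numpy as np

    grid = np.zeros((50, 50))                      # log-odds of occupancy, 0 = unknown (p = 0.5)

    def logit(p):
        return np.log(p / (1.0 - p))

    def update_cell(grid, i, j, p_occupied):
        """Fuse one sensor's occupancy estimate for cell (i, j)."""
        grid[i, j] += logit(p_occupied)            # independent-evidence (Bayes) update

    def probability(grid):
        return 1.0 - 1.0 / (1.0 + np.exp(grid))    # back to probabilities

    update_cell(grid, 10, 12, 0.8)                 # ultrasonic sensor says likely occupied
    update_cell(grid, 10, 12, 0.7)                 # IR sensor agrees
    print(round(probability(grid)[10, 12], 3))     # fused belief > either single reading
    ```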

  9. Use of data fusion to optimize contaminant transport predictions

    SciTech Connect

    Eeckhout, E. van

    1997-10-01

    The original data fusion workstation, as envisioned by Coleman Research Corp., was constructed under funding from DOE (EM-50) in the early 1990s. The intent was to demonstrate the viability of fusion and analysis of data from various types of sensors for waste site characterization, but primarily geophysical. This overall concept changed over time and evolved more towards hydrogeological (groundwater) data fusion after some initial geophysical fusion work focused at Coleman. This initial geophysical fusion platform was tested at Hanford and Fernald, and the later hydrogeological fusion work has been demonstrated at Pantex, Savannah River, the US Army Letterkenny Depot, a DoD Massachusetts site and a DoD California site. The hydrogeologic data fusion package has been spun off to a company named Fusion and Control Technology, Inc. This package is called the Hydrological Fusion And Control Tool (Hydro-FACT) and is being sold as a product that links with the software package, MS-VMS (MODFLOW-SURFACT Visual Modeling System), sold by HydroGeoLogic, Inc. MODFLOW is a USGS development, and is in the public domain. Since the government paid for the data fusion development at Coleman, the government and their contractors have access to the data fusion technology in this hydrogeologic package for certain computer platforms, but would probably have to hire FACT (Fusion and Control Technology, Inc.,) and/or HydroGeoLogic for some level of software and services. Further discussion in this report will concentrate on the hydrogeologic fusion module that is being sold as Hydro-FACT, which can be linked with MS-VMS.

  10. Dynamic Bayes net approach to multimodal sensor fusion

    NASA Astrophysics Data System (ADS)

    Singhal, Amit; Brown, Christopher R.

    1997-09-01

    Autonomous mobile robots rely on multiple sensors to perform a varied number of tasks in a given environment. Different tasks may need different sensors to estimate different subsets of world state. Also, different sensors can cooperate in discovering common subsets of world state. This paper presents a new approach to multimodal sensor fusion using dynamic Bayesian networks and an occupancy grid. The environment in which the robot operates is represented with an occupancy grid. This occupancy grid is asynchronously updated using probabilistic data obtained from multiple sensors and combined using Bayesian networks. Each cell in the occupancy grid stores multiple probability density functions representing combined evidence for the identity, location and properties of objects in the world. The occupancy grid also contains probabilistic representations for moving objects. Bayes nets allow information from one modality to provide cues for interpreting the output of sensors in other modalities. Establishing correlations or associations between sensor readings or interpretations leads to learning the conditional relationships between them. Thus bottom-up, reflexive, or even accidentally-obtained information can provide top-down cues for other sensing strategies. We present early results obtained for a mobile robot navigation task.

  11. Spectral photoplethysmographic imaging sensor fusion for enhanced heart rate detection

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Clausi, David A.; Wong, Alexander

    2016-03-01

    Continuous heart rate monitoring can provide important context for quantitative clinical assessment in scenarios such as long-term health monitoring and disability prevention. Photoplethysmographic imaging (PPGI) systems are particularly useful for such monitoring scenarios as contact-based devices pose problems related to comfort and mobility. Each pixel can be regarded as a virtual PPG sensor, thus enabling simultaneous measurements of multiple skin sites. Existing PPGI systems analyze temporal PPGI sensor fluctuations related to hemodynamic pulsations across a region of interest to extract the blood pulse signal. However, due to spatially varying optical properties of the skin, the blood pulse signal may not be consistent across all PPGI sensors, leading to inaccurate heart rate monitoring. To increase the hemodynamic signal-to-noise ratio (SNR), we propose a novel spectral PPGI sensor fusion method for enhanced estimation of the true blood pulse signal. Motivated by the observation that PPGI sensors with high hemodynamic SNR exhibit a spectral energy peak at the heart rate frequency, an entropy-based fusion model was formulated to combine PPGI sensors based on the sensors' spectral energy distribution. The optical PPGI device comprised a near infrared (NIR) sensitive camera and an 850 nm LED. Spatially uniform irradiance was achieved by placing optical elements along the LED beam, providing consistent illumination across the skin area. Dual-mode temporally coded illumination was used to negate the temporal effect of ambient illumination. Experimental results show that the spectrally weighted PPGI method can accurately and consistently extract heart rate information where traditional region-based averaging fails.
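
    A rough numerical sketch of spectral weighting is shown below; the entropy-based weighting rule here is an assumption intended only to illustrate how pixels with a peaky spectrum come to dominate the fused signal, and the signals are synthetic.

    ```python
    # Weight each virtual PPG sensor by how concentrated its spectral energy is
    # (low spectral entropy => high weight), then fuse and read off the peak frequency.
    import numpy as np

    fs = 30.0                                            # camera frame rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    pulse = np.sin(2 * np.pi * 1.2 * t)                  # 72 bpm blood pulse signal
    signals = np.stack([pulse + np.random.normal(0, s, t.size) for s in (0.2, 1.0, 3.0)])

    spectra = np.abs(np.fft.rfft(signals, axis=1)) ** 2
    p = spectra / spectra.sum(axis=1, keepdims=True)     # normalized spectral energy distribution
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    weights = np.exp(-entropy)                           # peaky spectrum => large weight
    weights /= weights.sum()

    fused = weights @ signals                            # weighted combination of the pixel signals
    peak_hz = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(np.abs(np.fft.rfft(fused)))]
    print(round(peak_hz * 60), "bpm")                    # heart-rate estimate from the fused signal
    ```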

  12. Optical associative memories for sensor fusion

    NASA Astrophysics Data System (ADS)

    Ralston, Lynda M.; Yoepp, John H.; Bardos, Andrew M.

    1992-08-01

    Modern military mission scenarios require very efficient access to multiple, large databases. Static "reference" databases and highly volatile databases which contain intelligence from sensors and other sources must be processed, cross referenced, and correlated. An architecture has been developed for a content addressable (associative) optical memory system. The system exploits the parallel access capabilities of optical disk memories to provide keyword correlation of free form text or structured databases within one revolution of the disk. The system consists of an optical disk drive augmented with an optical correlator and related electronics and software. The search string (keyword) is loaded into a spatial light modulator and optical matched filtering provides massively parallel readout to locate the desired data patterns on the disk. A digital degree-of-match (DOM) word is generated for each sector on the disk. Post processing based in digital electronics and software performs fuzzy computations to combine the DOMs for the current and previous keywords enabling the system to efficiently perform multi-step, content-based searches of the disk. Data stored in the best matching sectors is retrieved during the next revolution of the disk using the drive's standard read mechanism. The sustained processing rate of the optical correlator is 71 gigabits per second.

  13. Efficient Method for Optimizing Placement of Sensors

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Vatan, Farrokh

    2009-01-01

    A computationally efficient method has been developed to enable optimization of the placement of sensors for the purpose of diagnosis of a complex engineering system (e.g., an aircraft or spacecraft). The method can be used both in (1) designing a sensor system in which the number and positions of sensors are initially not known and must be determined and (2) adding sensors to a pre-existing system to increase the diagnostic capability. The optimal-sensor-placement problem can be summarized as involving the following concepts, issues, and subproblems: a) Degree of Diagnosability - This is a concept for characterizing the set of faults that can be discriminated by use of a given set of sensors. b) Minimal Sensor Set - The idea is one of finding a minimal set of sensors that guarantees a specific degree of diagnosability. c) Minimal-Cost Sensors - In a case in which different sensors are assigned with different costs, it is desired to choose the least costly set of sensors that affords a specific degree of diagnosability.
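
    The minimal-sensor-set subproblem can be illustrated with a small greedy sketch over a hypothetical fault/sensor signature table; this is not the NASA algorithm, only the set-cover-style idea behind it.

    ```python
    # Pick sensors until every pair of faults is discriminated, i.e. some chosen
    # sensor responds differently to the two faults.
    from itertools import combinations

    # Hypothetical sensor signatures: which faults each candidate sensor can see.
    signatures = {
        "s1": {"f1": 1, "f2": 0, "f3": 0},
        "s2": {"f1": 1, "f2": 1, "f3": 0},
        "s3": {"f1": 0, "f2": 1, "f3": 1},
    }
    faults = ["f1", "f2", "f3"]
    pairs = set(combinations(faults, 2))

    def discriminated(sensor, pair):
        a, b = pair
        return signatures[sensor][a] != signatures[sensor][b]

    chosen, remaining = [], set(pairs)
    while remaining:
        # Greedily take the sensor that discriminates the most still-ambiguous fault pairs.
        best = max(signatures, key=lambda s: sum(discriminated(s, p) for p in remaining))
        chosen.append(best)
        remaining = {p for p in remaining if not discriminated(best, p)}

    print(chosen)   # a small sensor set achieving full fault discriminability
    ```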

  14. Fusion of multiple sensor imagery based on target motion characteristics

    NASA Astrophysics Data System (ADS)

    Tsao, Tien-Ren; Libert, John M.

    1991-08-01

    Fusion of multiple sensor imagery is an effective approach to clutter rejection in target detection and recognition. However, image registration at the pixel level and even at the feature level poses significant problems. We are developing neural network computational schemes that permit fusion of multiple sensor information according to target motion characteristics. One such scheme implements the Law of Common Fate to differentiate moving targets from dynamic background clutter on the basis of homogeneous velocity; spatiotemporal frequency analysis is applied to time-varying sensor imagery to detect and locate individual moving objects. Another computational scheme applies Gabor filters and differential Gabor filters to calculate image flow and then employs a Lie group-based neural network to interpret the 2D image flow in terms of 3D motion, and to delineate regions of homogeneous 3D motion; the motion-keyed regions may be correlated among sensor types to associate multiattribute information with the individual targets in the scene and to exclude clutter.

  15. Inertial Sensor Error Reduction through Calibration and Sensor Fusion.

    PubMed

    Lambrecht, Stefan; Nogueira, Samuel L; Bortole, Magdo; Siqueira, Adriano A G; Terra, Marco H; Rocon, Eduardo; Pons, José L

    2016-02-17

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than or at least as well as local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking.

  16. Inertial Sensor Error Reduction through Calibration and Sensor Fusion

    PubMed Central

    Lambrecht, Stefan; Nogueira, Samuel L.; Bortole, Magdo; Siqueira, Adriano A. G.; Terra, Marco H.; Rocon, Eduardo; Pons, José L.

    2016-01-01

    This paper presents a comparison between cooperative and local Kalman filters (KF) for estimating the absolute segment angle under two calibration conditions: a simplified calibration that can be replicated in most laboratories, and a complex calibration similar to that applied by commercial vendors. The cooperative filters use information either from all inertial sensors attached to the body (Matricial KF) or from the inertial sensors and the potentiometers of an exoskeleton (Markovian KF). A one-minute walking trial of a subject walking with a 6-DoF exoskeleton was used to assess the absolute segment angle of the trunk, thigh, shank, and foot. The results indicate that regardless of the segment and filter applied, the more complex calibration always results in a significantly better performance compared to the simplified calibration. The interaction between filter and calibration suggests that when the quality of the calibration is unknown the Markovian KF is recommended. Applying the complex calibration, the Matricial and Markovian KF perform similarly, with average RMSE below 1.22 degrees. Cooperative KFs perform better than or at least as well as local KFs; we therefore recommend using cooperative KFs instead of local KFs for the control or analysis of walking. PMID:26901198

  17. Sensor fusion using neural network in the robotic welding

    SciTech Connect

    Ohshima, Kenji; Yabe, Masaaki; Akita, Kazuya; Kugai, Katsuya; Yamane, Satoshi; Kubota, Takefumi

    1995-12-31

    It is important to realize intelligent welding robots that obtain good-quality welding results. For this purpose, it is necessary to detect the torch height, the torch attitude, and the deviation from the center of the gap. In order to detect these simultaneously, the authors propose sensor fusion using a neural network, i.e., the information concerning the welding torch is detected from both the welding current and the welding voltage. First, the authors model the welding phenomena as the melting phenomena of the electrode wire in MIG welding and CO2 short-circuiting welding. Next, the training data for the neural networks are generated from numerical simulations. The neuro arc sensor is trained so as to achieve the desired sensor performance. Using it, seam tracking is carried out on a T-joint.

  18. Neural network fusion and inversion model for NDIR sensor measurement

    NASA Astrophysics Data System (ADS)

    Cieszczyk, Sławomir; Komada, Paweł

    2015-12-01

    This article presents the problem of the impact of environmental disturbances on the determination of information from measurements. As an example, an NDIR sensor is studied, which can measure industrial or environmental gases of varying temperature. The issue of changes in the values of influence quantities appears in many industrial measurements, and developing appropriate algorithms resistant to changing conditions is a key problem. In the resulting mathematical model of the inverse problem, additional input variables appear. Due to the difficulties in the mathematical description of the inverse model, neural networks have been applied: they do not require initial assumptions about the structure of the created model, and they provide correction of sensor non-linearity as well as correction of the influence of interfering quantities. The analyzed issue requires an additional measurement of the disturbing quantity and its combination with the measurement of the primary quantity; combining this information with the use of neural networks belongs to the class of sensor fusion algorithms.

  19. Optimization Strategies for Sensor and Actuator Placement

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Kincaid, Rex K.

    1999-01-01

    This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.

  20. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.

    PubMed

    Jung, Hyung-Sup; Park, Sung-Whan

    2014-12-18

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.

  1. A Survey on Optimal Signal Processing Techniques Applied to Improve the Performance of Mechanical Sensors in Automotive Applications

    PubMed Central

    Hernandez, Wilmar

    2007-01-01

    This paper surveys recent applications of optimal signal processing techniques to improve the performance of mechanical sensors. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight because there are some open research issues that have to be solved. This paper draws attention to one of the open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.

  2. Discrete Kalman Filter based Sensor Fusion for Robust Accessibility Interfaces

    NASA Astrophysics Data System (ADS)

    Ghersi, I.; Mariño, M.; Miralles, M. T.

    2016-04-01

    Human-machine interfaces have evolved, benefiting from growing access to devices with superior, embedded signal-processing capabilities, as well as from new sensors that allow the estimation of movements and gestures, resulting in increasingly intuitive interfaces. In this context, sensor fusion for the estimation of the spatial orientation of body segments makes it possible to achieve more robust solutions, overcoming specific disadvantages of isolated sensors, such as the sensitivity of magnetic-field sensors to external influences when used in uncontrolled environments. In this work, a method for combining image-processing data and angular-velocity registers from a 3D MEMS gyroscope through a discrete-time Kalman filter is proposed and deployed as an alternate user interface for mobile devices, in which an on-screen pointer is controlled with head movements. Results concerning the general performance of the method are presented, as well as a comparative analysis, under a dedicated test application, with results from a previous version of this system in which the relative-orientation information was acquired directly from MEMS sensors (3D magnetometer-accelerometer). These results show an improved response for this new version of the pointer, both in terms of precision and response time, while keeping many of the benefits highlighted for its predecessor, yielding a complementary method for signal acquisition that can be used as an alternative input device as well as for accessibility solutions.

  3. Distributed fusion and automated sensor tasking in ISR systems

    NASA Astrophysics Data System (ADS)

    Preden, Jurgo; Pahtma, Raido; Astapov, Sergei; Ehala, Johannes; Riid, Andri; Motus, Leo

    2014-06-01

    Modern Intelligence, Surveillance and Reconnaissance (ISR) systems are increasingly being assembled from autonomous systems, so the resulting ISR system is a System of Systems (SoS). In order to take full advantage of the capabilities of the ISR SoS, the architecture and design of these SoS should facilitate the benefits inherent in a SoS approach: high resilience, a higher level of adaptability and higher diversity, enabling on-demand system composition. The tasks performed by an ISR SoS can go well beyond basic data acquisition, conditioning and communication, since data processing can easily be integrated into the SoS. Such an ISR SoS can perform data fusion, classification and tracking (and conditional sensor tasking for additional data acquisition); these are extremely challenging tasks in this context, especially if the fusion is performed in a distributed manner. Our premise for the ISR SoS design and deployment is that the system is not designed as a complete system, in which the capabilities of individual data providers are considered and the interaction paths, including communication channel capabilities, are specified at design time. Instead, we assume a loosely coupled SoS, in which the data needs for a specific fusion task are described at a high level at design time and the data providers (i.e., sensor systems) required for a specific fusion task are discovered dynamically at run time, the selection criteria being the type and properties of the data that a specific provider can supply. The paper describes some aspects of a distributed ISR SoS design and implementation, giving examples of both the architectural design and the algorithm implementations.

  4. Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.

    1995-07-01

    Target acquisition in a high clutter environment in all-weather at any time of day represents a much needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division WL/MNG, has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost efficient system with these capabilities. Critical elements of any such seekers are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model, data analysis workstations, such as the TABILS Analysis and Management System (TAMS), and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical interface driven simulation by which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1 is presented in this paper using the MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.

  5. An alternative sensor fusion method for object orientation using low-cost MEMS inertial sensors

    NASA Astrophysics Data System (ADS)

    Bouffard, Joshua L.

    This thesis develops an alternative sensor fusion approach for object orientation using low-cost MEMS inertial sensors. The approach focuses on the unique challenges of small UAVs, which include vibration-induced noise on the accelerometer and bias offset errors in the rate gyroscope. To overcome these challenges, a sensor fusion algorithm combines the measured data from the accelerometer and rate gyroscope to achieve a single output free from vibrational noise and bias offset errors. One of the most prevalent sensor fusion algorithms used for orientation estimation is the Extended Kalman Filter (EKF). The EKF performs the fusion process by first creating the process model using the nonlinear equations of motion and then establishing a measurement model. With the process and measurement models established, the filter operates by propagating the mean and covariance of the states through time. The success of the EKF relies on the ability to establish a representative process and measurement model of the system. In most applications, the EKF measurement model utilizes the accelerometer and GPS-derived accelerations to determine an estimate of the orientation. However, if the GPS-derived accelerations are not available, the measurement model becomes less reliable in harsh vibrational environments. This situation led to the alternative approach, which focuses on the correlation between the rate gyroscope and the accelerometer-derived angle. The correlation between the two sensors then determines how much the algorithm will use one sensor over the other. The result is a measurement that does not suffer from vibrational noise or from bias offset errors.
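
    A rough sketch of a correlation-weighted blend, offered as one plausible reading of the approach rather than the thesis algorithm, is shown below: the accelerometer-derived angle is trusted more when it agrees (correlates) with the gyro-predicted angle over a recent window.

    ```python
    # Blend accelerometer-derived angle with gyro-integrated angle; the blend
    # weight is driven by the recent correlation between the two sources.
    import numpy as np

    def fuse_orientation(acc_angle, gyro_rate, dt, window=50):
        """acc_angle, gyro_rate: equal-length arrays of accelerometer-derived
        angles (rad) and gyro angular rates (rad/s) sampled every dt seconds."""
        fused = [acc_angle[0]]
        for k in range(1, len(acc_angle)):
            gyro_angle = fused[-1] + gyro_rate[k] * dt
            # Correlation between recent accelerometer angles and gyro-predicted angles.
            lo = max(0, k - window)
            a = acc_angle[lo:k + 1]
            g = np.cumsum(gyro_rate[lo:k + 1]) * dt + acc_angle[lo]
            corr = np.corrcoef(a, g)[0, 1] if len(a) > 2 else 0.0
            alpha = 0.5 * (1.0 + np.nan_to_num(corr))   # high agreement => trust accelerometer more
            fused.append(alpha * acc_angle[k] + (1.0 - alpha) * gyro_angle)
        return np.array(fused)
    ```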

  6. Optimal control theory applied to fusion plasma thermal stabilization

    SciTech Connect

    Sager, G.; Miley, G.; Maya, I.

    1985-01-01

    Many authors have investigated stability characteristics and performance of various burn control schemes. The work presented here represents the first application of optimal control theory to the problem of fusion plasma thermal stabilization. The objectives of this initial investigation were to develop analysis methods, demonstrate tractability, and present some preliminary results of optimal control theory in burn control research.

  7. Obstacle Detection System Involving Fusion of Multiple Sensor Technologies

    NASA Astrophysics Data System (ADS)

    Giannì, C.; Balsi, M.; Esposito, S.; Fallavollita, P.

    2017-08-01

    Obstacle detection is a fundamental task for Unmanned Aerial Vehicles (UAVs) as part of a Sense and Avoid system. In this study, we present a method of multi-sensor obstacle detection that demonstrated good results on different kinds of obstacles. This method can be implemented on low-cost platforms involving a DSP or a small FPGA. In this paper, we also present a study of the typical targets that can be difficult to detect because of their reflectivity, form factor and heterogeneity, and show how data fusion can often overcome the limitations of each individual technology.

  8. Optimization of Sensor Monitoring Strategies for Emissions

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Laird, C. D.; Downey, N.; Baker Hebert, L.; Blewitt, D.; Smith, G. R.

    2016-12-01

    Continuous or regularly scheduled monitoring has the potential to quickly identify changes in air quality. However, even with low-cost sensors, only a limited number of sensors can be placed to monitor airborne pollutants. The physical placement of these sensors and the sensor technology used can have a large impact on the performance of a monitoring strategy. Furthermore, sensors can be placed for different objectives, including maximum coverage, minimum time to detection or exposure, or to quantify emissions. Different objectives may require different monitoring strategies, which need to be evaluated by stakeholders before sensors are placed in the field. In this presentation, we outline methods to enhance ambient detection programs through optimal design of the monitoring strategy. These methods integrate atmospheric transport models with sensor characteristics, including fixed and mobile sensors, sensor cost and failure rate. The methods use site specific pre-computed scenarios which capture differences in meteorology, terrain, concentration averaging times, gas concentration, and emission characteristics. The pre-computed scenarios become input to a mixed-integer, stochastic programming problem that solves for sensor locations and types that maximize the effectiveness of the detection program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
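
    The placement problem can be illustrated with a toy scenario table: enumerate small candidate sensor sets and pick the one minimizing expected time to detection. The real formulation described above is a mixed-integer stochastic program; the locations, times and penalty below are invented.

    ```python
    # Scenario-based placement: choose sensor locations that minimize the expected
    # time to detect a release over a set of pre-computed transport scenarios.
    import itertools

    # detection_time[scenario][location] = time (min) at which a sensor at that
    # location would first detect the scenario's plume (None = never detects).
    detection_time = {
        "calm_north":   {"A": 5,    "B": 30,   "C": None},
        "windy_east":   {"A": None, "B": 10,   "C": 25},
        "stable_night": {"A": 40,   "B": None, "C": 8},
    }
    UNDETECTED_PENALTY = 120.0   # minutes charged when no chosen sensor sees the plume

    def expected_detection_time(locations):
        total = 0.0
        for times in detection_time.values():
            seen = [times[loc] for loc in locations if times[loc] is not None]
            total += min(seen) if seen else UNDETECTED_PENALTY
        return total / len(detection_time)

    budget = 2
    best = min(itertools.combinations(["A", "B", "C"], budget), key=expected_detection_time)
    print(best, expected_detection_time(best))
    ```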

  9. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially when mobility (i.e., locomotion control), routing (i.e., communications) and sensing are unique characteristics of mobile robots for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from stations to stations in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while

  10. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
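    The Allan variance step mentioned above is straightforward to reproduce. The sketch below computes the overlapping Allan deviation of a simulated rate signal; reading the curve at representative averaging times is how noise terms such as angle random walk and bias instability are usually identified before filter tuning. The simulated noise levels are arbitrary, and the Kalman filter itself is not shown.

```python
import numpy as np

# Overlapping Allan deviation of a simulated inertial rate signal (hedged
# sketch; the noise magnitudes below are arbitrary, not from the paper).

fs = 100.0                        # sample rate (Hz)
n = 200_000
rng = np.random.default_rng(0)
# white rate noise plus a slow rate random walk
rate = 0.02 * rng.standard_normal(n) + np.cumsum(1e-5 * rng.standard_normal(n))

def allan_deviation(rate, fs, m_list):
    """Overlapping Allan deviation for averaging lengths m (in samples)."""
    theta = np.cumsum(rate) / fs            # integrated angle
    taus, adev = [], []
    for m in m_list:
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2 * (len(theta) - 2 * m) * (m / fs) ** 2)
        taus.append(m / fs)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)

m_list = np.unique(np.logspace(0, 4, 30).astype(int))
taus, adev = allan_deviation(rate, fs, m_list)
# Angle random walk is conventionally read off the -1/2 slope at tau = 1 s.
print("ADEV near tau = 1 s:", adev[np.argmin(abs(taus - 1.0))])
```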

  11. Engineering of Sensor Network Structure for Dependable Fusion

    DTIC Science & Technology

    2014-08-15

    Only fragments of this record survive extraction. It describes a novel sensor architecture that partitions the overall design into two separate but interacting design spaces, the first being an Information Space (IS); the remainder is garbled. A related publication (title truncated, ending "...Sleep Mode") appeared at the IEEE 10th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt'12), 17 May 2012.

  12. Sensor fusion III; Proceedings of the Meeting, Orlando, FL, Apr. 19, 20, 1990

    SciTech Connect

    Harney, R.C.

    1990-01-01

    Multisensor image segmentation algorithms, scene descriptions from radar and infrared images, and combination of evidences with dependency information in distributed sensor systems are discussed. Preference voting for sensor fusion, a simulation-based test bed for data-association algorithms, receiver operating characteristics for various sensor fusion schemes, and adaptive multisensor change detection are analyzed. A tree-structured sensor fusion architecture for distributed sensor networks and performance modeling of multisensor networks is covered, along with components for sensor fusion systems, including high-precision line-of-sight stabilization for a 1-m space telescope with onboard image processing. The continuous mode of operation of a space-time light modulator is addressed, as well as an embedded knowledge-based system for automatic target recognition and multisensor data fusion for mine detection.

  13. Optimization of the coplanar interdigital capacitive sensor

    NASA Astrophysics Data System (ADS)

    Huang, Yunzhi; Zhan, Zheng; Bowler, Nicola

    2017-02-01

    Interdigital capacitive sensors are applied in nondestructive testing and material property characterization of low-conductivity materials. The sensor performance is typically described based on the penetration depth of the electric field into the sample material, the sensor signal strength and its sensitivity. These factors all depend on the geometry and material properties of the sensor and sample. In this paper, a detailed analysis is provided, through finite element simulations, of the ways in which the sensor's geometrical parameters affect its performance. The geometrical parameters include the number of digits forming the interdigital electrodes and the ratio of digit width to their separation. In addition, the influence of the presence or absence of a metal backplane on the sample is analyzed. Further, the effects of sensor substrate thickness and material on signal strength are studied. The results of the analysis show that it is necessary to take into account a trade-off between the desired sensitivity and penetration depth when designing the sensor. Parametric equations are presented to assist the sensor designer or nondestructive evaluation specialist in optimizing the design of a capacitive sensor.

  14. Wireless sensor information fusion for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Ou, Jinping; Li, Hongwei

    2003-04-01

    The process of implementing a damage detection strategy for engineering systems is often referred to as structural health monitoring (SHM). SHM is particularly important for large structures such as suspension and cable-stayed bridges, towers, and offshore platforms. Several advanced technologies for infrastructure health monitoring have attracted attention, among which the wireless sensor network (WSN) has recently received special interest. A WSN offers lower capital and installation costs and can provide more reliable communication of sensor measurements. However, for untethered nodes, the finite energy budget is a primary design constraint. It is therefore desirable to process data as much as possible inside the network to reduce the number of bits transmitted, particularly over longer distances. In this paper, a WSN is proposed for health monitoring of an offshore platform, and a laboratory prototype was designed and developed to demonstrate its feasibility and validity. In the prototype, wireless sensor nodes were deployed on a model of an offshore platform, and a Wireless Sensor Local Area Network (WSLAN) transferred the simulated data between a personal computer and the microsensor nodes without cables. To minimize energy consumption, algorithms that fuse the acceleration, temperature and magnetic sensors on a single node are being developed, and the current state of structural health is determined by fusing data from local nodes. In our deployment, UC Berkeley motes are used as the wireless sensor nodes to examine many of the issues relating to their usage and to our information fusion algorithm.

  15. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    PubMed Central

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  16. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection.

    PubMed

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-07-19

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated
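    As a hedged illustration of the final decision-fusion stage (not the modBMVT detector or the RANSARC registration), the sketch below trains an Adaboost classifier on a few per-sensor detection scores; boosting over decision stumps acts as the feature-selection mechanism described in the abstract. The SAR/IR score features and their statistics are synthetic.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Decision-level SAR/IR fusion via Adaboost feature selection (illustrative
# sketch). Each candidate target chip is represented by per-sensor detection
# scores; Adaboost implicitly selects and weights the most informative ones.
# All data below are synthetic stand-ins, not from the cited database.

rng = np.random.default_rng(1)
n = 400
# columns: [SAR contrast, SAR CFAR score, IR intensity, IR local SNR]
targets = rng.normal([2.0, 3.0, 1.5, 2.5], 1.0, size=(n, 4))
clutter = rng.normal([0.5, 1.0, 0.5, 0.8], 1.0, size=(n, 4))
X = np.vstack([targets, clutter])
y = np.hstack([np.ones(n), np.zeros(n)])

fusion = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print("feature weights (SAR, SAR, IR, IR):", fusion.feature_importances_)
print("training detection rate:", fusion.score(X, y))
```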

  17. Context-aided sensor fusion for enhanced urban navigation.

    PubMed

    Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María

    2012-12-06

     The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments.

  18. Context-Aided Sensor Fusion for Enhanced Urban Navigation

    PubMed Central

    Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María

    2012-01-01

    The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080
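    The core "context-aided" idea, adapting the filter to a reasoned assessment of sensor quality, can be shown with a deliberately minimal one-dimensional filter rather than the paper's constrained Unscented Kalman filter: when a context flag marks an urban-canyon stretch, the GNSS measurement noise is inflated so the estimate leans on the inertial prediction. All noise values and the context rule are hypothetical.

```python
import numpy as np

# Minimal context-aided 1-D position filter (sketch only, not the paper's
# constrained UKF). A contextual module selects the GNSS noise level; in an
# "urban canyon" the filter trusts the INS velocity prediction instead.

def context_aided_kf(gnss, imu_vel, canyon_flags, dt=1.0):
    x, P = gnss[0], 25.0
    q, r_open, r_canyon = 0.5, 4.0, 400.0       # hypothetical noise levels
    out = []
    for z, v, in_canyon in zip(gnss, imu_vel, canyon_flags):
        x, P = x + v * dt, P + q                 # predict with INS velocity
        r = r_canyon if in_canyon else r_open    # context picks the noise
        k = P / (P + r)
        x, P = x + k * (z - x), (1 - k) * P      # GNSS update
        out.append(x)
    return np.array(out)

truth = np.cumsum(np.ones(60))                        # 1 m/s straight path
flags = np.array([30 <= t < 45 for t in range(60)])   # canyon in the middle
rng = np.random.default_rng(2)
gnss = truth + rng.normal(0, np.where(flags, 20.0, 2.0))
est = context_aided_kf(gnss, np.ones(60), flags)
print("RMS position error:", np.sqrt(np.mean((est - truth) ** 2)))
```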

  19. Multi-sensor fusion techniques for state estimation of micro air vehicles

    NASA Astrophysics Data System (ADS)

    Donavanik, Daniel; Hardt-Stremayr, Alexander; Gremillion, Gregory; Weiss, Stephan; Nothwang, William

    2016-05-01

    Aggressive flight of micro air vehicles (MAVs) in unstructured, GPS-denied environments poses unique challenges for estimation of vehicle pose and velocity due to the noise, delay, and drift in individual sensor measurements. Maneuvering flight at speeds in excess of 5 m/s poses additional challenges even for active range sensors; in the case of LIDAR, an assembled scan of the vehicle's environment will in most cases be obsolete by the time it is processed. Multi-sensor fusion techniques which combine inertial measurements with passive vision techniques and/or LIDAR have achieved breakthroughs in the ability to maintain accurate state estimates without the use of external positioning sensors. In this paper, we survey algorithmic approaches to exploiting sensors with a wide range of nonlinear dynamics using filter and bundle-adjustment based approaches for state estimation and optimal control. From this foundation, we propose a biologically-inspired framework for incorporating the human operator in the loop as a privileged sensor in a combined human/autonomy paradigm.

  20. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox.

    PubMed

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-02-21

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcraft features, manual-selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment.

  1. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox

    PubMed Central

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-01-01

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcraft features, manual-selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767
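    A minimal two-branch convolutional network conveys the feature-level end of the fusion idea; note that the fixed concatenation below is a simplification, since the paper's method learns how to combine different fusion levels adaptively. Layer sizes, sensor channels and class count are made up.

```python
import torch
import torch.nn as nn

# Two-branch 1-D CNN with feature-level fusion (hedged sketch; not the
# paper's adaptive combination of fusion levels). Sensor names, layer sizes
# and the four-class output are hypothetical.

class TwoSensorCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
        self.vib = branch()      # vibration channel
        self.cur = branch()      # motor-current channel
        self.head = nn.Linear(64, n_classes)

    def forward(self, vib, cur):
        # concatenate per-sensor features, then classify the fused vector
        f = torch.cat([self.vib(vib).flatten(1), self.cur(cur).flatten(1)], dim=1)
        return self.head(f)

model = TwoSensorCNN()
vib = torch.randn(8, 1, 1024)    # batch of raw vibration segments
cur = torch.randn(8, 1, 1024)    # batch of raw current segments
print(model(vib, cur).shape)     # torch.Size([8, 4])
```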

  2. Sensor Integration, Management and Data Fusion Concepts in a Naval Command and Control Perspective

    DTIC Science & Technology

    2016-06-07

    This report, "Sensor Integration, Management and Data Fusion Concepts in a Naval Command and Control Perspective" by J. Roy and E. Bosse, survives only as a garbled cover-sheet extract. It cites "Information Understanding: Integrating Data Fusion and Data Mining Processes", Proceedings of the 1998 IEEE International Symposium on Circuits and ...

  3. Data Strategies to Support Automated Multi-Sensor Data Fusion in a Service Oriented Architecture

    DTIC Science & Technology

    2008-06-01

    Thesis by CDR Kurt J. Rothenhaus: "Data Strategies to Support Automated Multi-Sensor Data Fusion in a Service Oriented Architecture". Only fragments of the abstract survive extraction: "... end-to-end solutions promised by SOA technologies. Software architectural patterns in conjunction with broad data strategies are required to harness ..."

  4. Fusion Research of Electrical Tomography with Other Sensors for Two-phase Flow Measurement

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Yang, W. Q.

    2012-01-01

    Two-phase flows exist widely in nature and in industrial processes. The measurement of two-phase flows, including gas/solids, gas/liquid and liquid/liquid flows, is still challenging. Fusing electrical tomography with conventional sensors offers possibilities to improve the accuracy of two-phase flow measurement. In this paper, fusions of (1) electrical resistance tomography (ERT) with an electromagnetic (EM) flowmeter, (2) electrical capacitance tomography (ECT) with ERT and (3) ECT with an electrostatic sensor are introduced. Some research results of these fusion methods are presented and discussed. This paper provides theoretical support for multi-sensor fusion for two-phase flow measurement.

  5. Based on Multi-sensor Information Fusion Algorithm of TPMS Research

    NASA Astrophysics Data System (ADS)

    Yulan, Zhou; Yanhong, Zang; Yahong, Lin

    This paper presents algorithms for a TPMS (Tire Pressure Monitoring System) based on multi-sensor information fusion. A unified mathematical model of information fusion is constructed, and three algorithms are applied: an algorithm based on Bayesian inference, an algorithm based on relative distance (an improved algorithm of Bayesian evidence theory), and an algorithm based on multi-sensor weighted fusion. The calculated results show that the multi-sensor fusion algorithm based on D-S evidence theory performs better than the weighted information fusion method or the Bayesian method.
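    For the weighted-fusion variant, the standard construction is inverse-variance weighting, which is also the optimal linear combination under Gaussian assumptions. The sketch below fuses three hypothetical pressure readings this way; the variances stand in for per-sensor calibration data and are not from the paper.

```python
import numpy as np

# Inverse-variance weighted fusion of redundant pressure readings (hedged
# sketch of the weighted-fusion idea; all numbers are hypothetical).

readings = np.array([221.0, 218.5, 224.0])    # kPa from three sensors
variances = np.array([4.0, 1.0, 9.0])         # kPa^2, e.g. from calibration

weights = (1.0 / variances) / np.sum(1.0 / variances)   # normalized weights
fused = np.dot(weights, readings)
fused_var = 1.0 / np.sum(1.0 / variances)                # fused uncertainty
print(f"fused pressure = {fused:.1f} kPa, variance = {fused_var:.2f} kPa^2")
```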

  6. Optimal sensor placement in structural health monitoring using discrete optimization

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Büyüköztürk, Oral

    2015-12-01

    The objective of optimal sensor placement (OSP) is to obtain a sensor layout that gives as much information of the dynamic system as possible in structural health monitoring (SHM). The process of OSP can be formulated as a discrete minimization (or maximization) problem with the sensor locations as the design variables, conditional on the constraint of a given sensor number. In this paper, we propose a discrete optimization scheme based on the artificial bee colony algorithm to solve the OSP problem after first transforming it into an integer optimization problem. A modal assurance criterion-oriented objective function is investigated to measure the utility of a sensor configuration in the optimization process based on the modal characteristics of a reduced order model. The reduced order model is obtained using an iterated improved reduced system technique. The constraint is handled by a penalty term added to the objective function. Three examples, including a 27 bar truss bridge, a 21-storey building at the MIT campus and the 610 m high Canton Tower, are investigated to test the applicability of the proposed algorithm to OSP. In addition, the proposed OSP algorithm is experimentally validated on a physical laboratory structure which is a three-story two-bay steel frame instrumented with triaxial accelerometers. Results indicate that the proposed method is efficient and can be potentially used in OSP in practical SHM.
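    The modal assurance criterion (MAC) objective can be illustrated without the bee colony: the sketch below exhaustively searches small sensor subsets for the one whose restricted mode shapes have the smallest worst-case off-diagonal MAC, i.e., remain most distinguishable. The mode-shape matrix is a random stand-in rather than a reduced-order structural model, and exhaustive search replaces the paper's discrete optimizer.

```python
import numpy as np
from itertools import combinations

# MAC-oriented sensor placement (hedged sketch). Pick sensor DOFs so that the
# mode shapes, sampled only at those DOFs, stay distinguishable (small
# off-diagonal MAC). Random mode shapes stand in for a real structural model.

rng = np.random.default_rng(3)
n_dof, n_modes, n_sensors = 12, 3, 4
Phi = rng.standard_normal((n_dof, n_modes))   # columns = mode shapes

def max_offdiag_mac(dofs):
    P = Phi[list(dofs), :]
    G = P.T @ P                                # cross products at chosen DOFs
    mac = G ** 2 / np.outer(np.diag(G), np.diag(G))
    return np.max(mac - np.eye(n_modes))       # worst off-diagonal MAC value

best = min(combinations(range(n_dof), n_sensors), key=max_offdiag_mac)
print("sensor DOFs:", best, "worst MAC:", round(max_offdiag_mac(best), 3))
```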

  7. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue which the fusion rules cannot be self-adaptively adjusted by using available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) image, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm) by integrating the merit of genetic arithmetic together with the advantage of iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm considers the wavelet transform of the translation invariance as the model operator, also regards the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function by taking use of the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to get a higher resolution of RS image. As discussed above, the bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents GSDA algorithm for the self-adaptively adjustment of the fusion rules.•This text comes up with the model operator and the observed operator as the fusion scheme of RS image based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  8. An optimized hydrogen target for muon catalyzed fusion

    NASA Astrophysics Data System (ADS)

    Gheisari, R.

    2011-04-01

    This paper deals with the optimization of the processes involved in muon catalyzed fusion. Muon catalyzed fusion ( μCF) is studied in all layers of the solid hydrogen structure H/0.1%T⊕D2⊕HD. The layer H/ T acts as an emitter source of energetic tμ atoms, due to the so-called Ramsauer-Townsend effect. These tμ atoms are slowed down in the second layer (degrader) and are forced to take place nuclear fusion in HD. The degrader affects time evolution of tμ atomic beam. This effect has not been considered until now in μCF-multilayered targets. Due to muon cycling and this effect, considerable reactions occur in the degrader. In our calculations, it is shown that the fusion yield equals 180±1.5. It is possible to separate events that overlap in time.

  9. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  10. Optimizing Retransmission Threshold in Wireless Sensor Networks.

    PubMed

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-05-10

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. This method does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ · max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial, a linear programming-based (1 + p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance.

  11. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. This method does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ · max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than polynomial, a linear programming-based (1 + p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
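    A compact dynamic program captures the flavor of the threshold optimization: allocate a total attempt budget across hops so that the sum of per-hop on-time delivery probabilities is maximized. This is a simplified, centralized model of the objective, not the authors' distributed algorithm; link qualities, bounds and the budget are invented.

```python
from functools import lru_cache

# Simplified DP for per-hop retransmission thresholds (hedged sketch). Choose
# u_i <= ub_i with sum(u_i) <= DELTA to maximize sum_i (1 - (1 - p_i)^u_i),
# i.e. the total on-time delivery probability over the path. Values made up.

p = [0.6, 0.8, 0.5]          # per-attempt success probability of each hop
ub = [5, 5, 5]               # given upper bounds u_i on the thresholds
DELTA = 9                    # total transmission-delay (attempt) budget

@lru_cache(maxsize=None)
def best(i, budget):
    """Max total on-time probability for hops i.. with `budget` attempts left."""
    if i == len(p):
        return 0.0, ()
    options = []
    # reserve at least one attempt for every remaining hop
    for u in range(1, min(ub[i], budget - (len(p) - 1 - i)) + 1):
        prob = 1.0 - (1.0 - p[i]) ** u
        rest, plan = best(i + 1, budget - u)
        options.append((prob + rest, (u,) + plan))
    return max(options)

value, thresholds = best(0, DELTA)
print("thresholds:", thresholds, "objective:", round(value, 3))
```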

  12. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.

  13. An Approach to Optimize the Fusion Coefficients for Land Cover Information Enhancement with Multisensor Data

    NASA Astrophysics Data System (ADS)

    Garg, Akanksha; Brodu, Nicolas; Yahia, Hussein; Singh, Dharmendra

    2016-04-01

    This paper explores a novel data fusion method that applies a machine learning approach for the optimally weighted fusion of multisensor data, helping to extract the maximum information about any land cover. A considerable amount of research has been carried out on multisensor data fusion, but obtaining an optimal fusion for the enhancement of land cover information using random weights remains ambiguous. There is therefore a need for a land cover monitoring system that can provide the maximum information about the land cover, which is generally not possible with single-sensor data, and a need to develop techniques by which the information in multisensor data can be utilized optimally. Machine learning is one of the best ways to perform this optimization. In this paper, the weights of each sensor's data, which are required for the fusion, have been critically analyzed, and it is observed that the fusion is quite sensitive to the weights. Therefore, different combinations of weights have been tested exhaustively in order to develop a relationship between the weights and the classification accuracy of the fused data. This relationship can be optimized through machine learning techniques such as the SVM (Support Vector Machine). In the present study, this experiment has been carried out for PALSAR (Phased Array L-Band Synthetic Aperture RADAR) and MODIS (Moderate Resolution Imaging Spectroradiometer) data. PALSAR is fully polarimetric data with HH, HV and VV polarizations at good spatial resolution (25 m), and NDVI (Normalized Difference Vegetation Index) is a good indicator of vegetation, utilizing different bands (Red and NIR) of freely available MODIS data at 250 m resolution. First, the resolution of NDVI has been enhanced from 250 m to 25 m (10 times) using a modified DWT (Modified Discrete Wavelet Transform) to bring it to the same scale as that of PALSAR. Then, the differently polarized PALSAR data (HH, HV, VV) have been fused with the resolution-enhanced NDVI
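    The weight-versus-accuracy relationship described above can be probed with a simple grid search: fuse the two bands pixel-wise as w·SAR + (1-w)·NDVI, score each weight by cross-validated SVM accuracy, and keep the best. The synthetic pixels below merely stand in for PALSAR backscatter and resolution-enhanced NDVI; the search itself is our simplification of the paper's optimization.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Grid search over the fusion weight, scored by SVM classification accuracy
# (hedged sketch with synthetic pixels instead of real PALSAR/MODIS data).

rng = np.random.default_rng(4)
n = 300
labels = rng.integers(0, 2, n)                     # two land-cover classes
sar = rng.normal(labels * 1.0, 1.0, n)             # HH backscatter surrogate
ndvi = rng.normal(labels * 0.4 + 0.2, 0.15, n)     # vegetation index surrogate

best_w, best_acc = None, -1.0
for w in np.linspace(0.0, 1.0, 11):
    fused = (w * sar + (1.0 - w) * ndvi).reshape(-1, 1)
    acc = cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"best weight w = {best_w:.1f}, CV accuracy = {best_acc:.3f}")
```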

  14. A Reliability-Based Method to Sensor Data Fusion

    PubMed Central

    Zhuang, Miaoyan; Xie, Chunhe

    2017-01-01

    Multi-sensor data fusion technology based on Dempster–Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method. PMID:28678179

  15. A Reliability-Based Method to Sensor Data Fusion.

    PubMed

    Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe

    2017-07-05

    Multi-sensor data fusion technology based on Dempster-Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method.

  16. Imaging sensor fusion and enhanced vision for helicopter landing operations

    NASA Astrophysics Data System (ADS)

    Hebel, Marcus; Bers, Karlheinz; Jäger, Klaus

    2006-05-01

    An automatic target recognition system has been assembled and tested at the Research Institute for Optronics and Pattern Recognition in Germany over the last years. Its multisensorial design comprises off-the-shelf components: an FPA infrared camera, a scanning laser radar and an inertial measurement unit. In this paper we describe several possibilities for the use of this multisensor equipment during helicopter missions. We discuss suitable data processing methods, for instance the automatic time synchronization of different imaging sensors, pixel-based data fusion and the incorporation of collateral information. The results are visualized in an appropriate way for presentation on a cockpit display. We also show how our system can act as a landing aid for pilots in brownout conditions (dust clouds caused by the landing helicopter).

  17. Improvement of Kinect™ Sensor Capabilities by Fusion with Laser Sensing Data Using Octree

    PubMed Central

    Chávez, Alfredo; Karstoft, Henrik

    2012-01-01

    To enhance sensor capabilities, sensor data readings from different modalities must be fused. The main contribution of this paper is to present a sensor data fusion approach that can reduce Kinect™ sensor limitations. This approach involves combining laser with Kinect™ sensors. Sensor data is modelled in a 3D environment based on octrees using a probabilistic occupancy estimation. The Bayesian method, which takes into account the uncertainty inherent in the sensor measurements, is used to fuse the sensor information and update the 3D octree map. The sensor fusion yields a significant increase of the field of view of the Kinect™ sensor that can be used for robot tasks. PMID:22666006

  18. Improvement of KinectTM sensor capabilities by fusion with laser sensing data using octree.

    PubMed

    Chávez, Alfredo; Karstoft, Henrik

    2012-01-01

    To enhance sensor capabilities, sensor data readings from different modalities must be fused. The main contribution of this paper is to present a sensor data fusion approach that can reduce Kinect(TM) sensor limitations. This approach involves combining laser with Kinect(TM) sensors. Sensor data is modelled in a 3D environment based on octrees using a probabilistic occupancy estimation. The Bayesian method, which takes into account the uncertainty inherent in the sensor measurements, is used to fuse the sensor information and update the 3D octree map. The sensor fusion yields a significant increase of the field of view of the Kinect(TM) sensor that can be used for robot tasks.
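    The Bayesian occupancy update at the heart of the octree fusion reduces to a log-odds accumulation per voxel. The sketch below folds laser and Kinect hits and misses into one occupancy probability under a hypothetical inverse sensor model; the actual octree bookkeeping is omitted, and the hit/miss probabilities are made up.

```python
import math

# Per-voxel Bayesian occupancy fusion in log-odds form (hedged sketch of the
# update used when fusing laser and Kinect returns; probabilities are
# hypothetical, and the octree data structure itself is not shown).

def logit(p):
    return math.log(p / (1.0 - p))

P_HIT = {"laser": 0.85, "kinect": 0.7}    # P(occupied | sensor reports hit)
P_MISS = {"laser": 0.3, "kinect": 0.4}    # P(occupied | sensor reports miss)

def fuse(observations, prior=0.5):
    """observations: list of (sensor_name, saw_hit) for one voxel."""
    l = logit(prior)
    for sensor, hit in observations:
        l += logit(P_HIT[sensor] if hit else P_MISS[sensor])
    return 1.0 / (1.0 + math.exp(-l))     # back to a probability

obs = [("kinect", True), ("laser", True), ("laser", False)]
print(f"P(occupied) = {fuse(obs):.2f}")
```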

  19. Optimal Sensor Selection for Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  20. Knowledge assistant: A sensor fusion framework for robotic environmental characterization

    SciTech Connect

    Feddema, J.T.; Rivera, J.J.; Tucker, S.D.

    1996-12-01

    A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system where the sensed objects and their attributes (e.g. estimated dimensions, weight, material composition, etc.) are displayed in the world model. This paper highlights the major components of this system.

  1. Information fusion based optimal control for large civil aircraft system.

    PubMed

    Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen

    2015-03-01

    Wind disturbance has a great influence on the landing security of Large Civil Aircraft. Simulation research and engineering experience show that PID control is not good enough to restrain wind disturbance. This paper focuses on anti-wind attitude control for Large Civil Aircraft in the landing phase. In order to improve riding comfort and flight security, an information fusion based optimal control strategy is presented to restrain the wind in the landing phase while maintaining attitudes and airspeed. Data for the Boeing 707 are used to establish a nonlinear model of a Large Civil Aircraft with the full set of variables, from which two linear models, divided into longitudinal and lateral equations, are obtained. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator is designed in the lateral control channel to achieve lateral attitude holding. According to information fusion estimation, the optimal estimate of the control sequence is derived by fusing the hard constraint information of the system dynamic equations with the soft constraint information of the performance index function. Based on this, an information fusion state regulator is deduced for a discrete-time linear system with disturbance. The simulation results for the nonlinear aircraft model indicate that the information fusion optimal control is better than traditional PID control, LQR control and LQR control with integral action in terms of anti-wind disturbance performance in the landing phase.

  2. Sensor-knowledge-command fusion paradigm for man/machine systems

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Schenker, Paul S.; Park, Jun

    1991-01-01

    Sensing-knowledge-command (SKC) fusion is presented as a fundamental paradigm of implementing cooperative control for an advanced man-machine system. SKC fusion operates on the 'SKC fusion network,' which represents the connection between sensor data to commands through knowledge. Sensing, knowledge, and command of a human and a machine are tapped into the network to provide inputs, or stimuli, to the network. Such stimuli automatically invoke an SKC fusion process and generate a fused output for cooperative control. Once invoked by stimuli, the SKC fusion process forces the network to converge to a new equilibrium state through the network dynamics composed of data fusion, feature transformation, and constraint propagation. The SKC fusion process thus integrates redundant information, maintains network consistency, identifies faulty data and concepts, and specifies those concepts to be strengthened through sensor planning.

  3. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.

    1992-11-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  4. Application of Multi-Sensor Information Fusion Method Based on Rough Sets and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Xue, Jinxue; Wang, Guohu; Wang, Xiaoqiang; Cui, Fengkui

    In order to improve the precision and data processing speed of multi-sensor information fusion, a multi-sensor data fusion algorithm has been studied in this paper. First, rough set theory (RS) is used for attribute reduction of the parameter set, exploiting its advantages in dealing with large amounts of data to eliminate redundant information. Then, the data are trained and classified by a Support Vector Machine (SVM). Experimental results showed that this method can improve the speed and accuracy of a multi-sensor fusion system.

  5. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in many applications, for example by the military, for intervention in areas presenting danger to human life, for the management of natural disasters, environmental monitoring, exploration and agriculture. The integrity of the localization of the robots must be ensured so that they can achieve their mission in the best conditions. Robots are equipped with proprioceptive (encoders, gyroscope) and exteroceptive sensors (Kinect). However, these sensors can be affected by various fault types that can be assimilated to erroneous measurements, biases, outliers and drifts. In the absence of a sensor fault diagnosis step, the integrity and continuity of the localization are affected. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which can be relevant when dealing with the fault tolerance aspect. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.
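    One way to act on the information gain of an observation, loosely in the spirit of the FDE approach above, is to measure how much each update moves the state density and to exclude updates that are implausibly informative. The sketch below does this for a scalar Kalman filter using the Gaussian KL divergence; the gate value, noise levels and data are hypothetical, not the paper's optimized thresholds.

```python
import numpy as np

# Information-gain gate for fault exclusion in a scalar Kalman filter
# (hedged sketch; gate and noise values are hypothetical).

def kl_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def fuse_with_fde(x, P, measurements, R=1.0, gate=2.0):
    for z in measurements:
        K = P / (P + R)                               # Kalman gain
        x_new, P_new = x + K * (z - x), (1.0 - K) * P
        gain = kl_gauss(x_new, P_new, x, P)           # information brought by z
        if gain > gate:                               # too surprising: exclude
            continue
        x, P = x_new, P_new
    return x, P

x_est, P_est = fuse_with_fde(0.0, 4.0, measurements=[0.3, -0.1, 8.0, 0.2])
print(f"estimate = {x_est:.2f}, variance = {P_est:.2f}")  # outlier 8.0 rejected
```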

  6. Belief Function Based Decision Fusion for Decentralized Target Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Wenyu; Zhang, Zhenjiang

    2015-08-19

    Decision fusion in sensor networks enables sensors to improve classification accuracy while reducing the energy consumption and bandwidth demand for data transmission. In this paper, we focus on the decentralized multi-class classification fusion problem in wireless sensor networks (WSNs) and a new simple but effective decision fusion rule based on belief function theory is proposed. Unlike existing belief function based decision fusion schemes, the proposed approach is compatible with any type of classifier because the basic belief assignments (BBAs) of each sensor are constructed on the basis of the classifier's training output confusion matrix and real-time observations. We also derive explicit global BBA in the fusion center under Dempster's combinational rule, making the decision making operation in the fusion center greatly simplified. Also, sending the whole BBA structure to the fusion center is avoided. Experimental results demonstrate that the proposed fusion rule has better performance in fusion accuracy compared with the naïve Bayes rule and weighted majority voting rule.

  7. Belief Function Based Decision Fusion for Decentralized Target Classification in Wireless Sensor Networks

    PubMed Central

    Zhang, Wenyu; Zhang, Zhenjiang

    2015-01-01

    Decision fusion in sensor networks enables sensors to improve classification accuracy while reducing the energy consumption and bandwidth demand for data transmission. In this paper, we focus on the decentralized multi-class classification fusion problem in wireless sensor networks (WSNs) and a new simple but effective decision fusion rule based on belief function theory is proposed. Unlike existing belief function based decision fusion schemes, the proposed approach is compatible with any type of classifier because the basic belief assignments (BBAs) of each sensor are constructed on the basis of the classifier’s training output confusion matrix and real-time observations. We also derive explicit global BBA in the fusion center under Dempster’s combinational rule, making the decision making operation in the fusion center greatly simplified. Also, sending the whole BBA structure to the fusion center is avoided. Experimental results demonstrate that the proposed fusion rule has better performance in fusion accuracy compared with the naïve Bayes rule and weighted majority voting rule. PMID:26295399
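    For reference, Dempster's rule of combination, the operation the fusion center applies to the per-sensor BBAs in the scheme above, can be written in a few lines over a small frame of discernment. The two example BBAs below are made up for illustration.

```python
from itertools import product

# Dempster's rule of combination over a small frame of discernment (hedged
# sketch). BBAs map subsets (frozensets) of the frame to masses; the example
# masses are invented, not taken from the paper's confusion matrices.

def dempster(m1, m2):
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y                 # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

CAR, TRUCK = frozenset({"car"}), frozenset({"truck"})
EITHER = CAR | TRUCK
sensor_1 = {CAR: 0.6, TRUCK: 0.1, EITHER: 0.3}
sensor_2 = {CAR: 0.5, TRUCK: 0.2, EITHER: 0.3}
print(dempster(sensor_1, sensor_2))   # car mass dominates after combination
```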

  8. Sensor fusion for the localisation of birds in flight

    NASA Astrophysics Data System (ADS)

    Millikin, Rhonda Lorraine

    Tracking and identification of birds in flight remains a goal of aviation safety worldwide and conservation in North America. Marine surveillance radar, tracking radar and more recently weather radar have been used to monitor mass movements of birds. The emphasis has been on prediction of migration fronts where thousands of birds follow weather patterns across a large geographic area. Microphones have been stationed over wide areas to receive calls of these birds and help catalogue the diversity of species comprising these migrations. A most critical feature of landbird migration is where the birds land to rest and feed. These habitats are not known and therefore cannot effectively be protected. For effective management of landbird migrants (nocturnal migrant birds), short-range flight behaviour (100--300 m above ground) is the critical air space to monitor. To ensure conservation efforts are focused on endangered species and species truly at risk, species of individual birds must be identified. Short-range monitoring of individual birds is also important for aviation safety. Up to 75% of bird-aircraft collisions occur within 500 ft (153 m) above the runway. Identification of each bird will help predict its flight path, a critical factor in the prevention of a collision. This thesis focuses on short-range identification of individual birds to localise birds in flight. This goal is achieved through fusing data from two sensor systems, radar and acoustic. This fusion provides more accurate tracking of birds in the lower airspace and allows for the identification of species of interest. In the fall of 1999, an experiment was conducted at Prince Edward Point, a southern projection of land on the north shore of Lake Ontario, to prove that the fusion of radar and acoustic sensors enhances the detection, location and tracking of nocturnal migrant birds. As these birds migrate at night, they are difficult to track visually. However, they are detectable with X

  9. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Due to the higher accuracy of the multi-sensor, two efficient data fusion approaches, the Kalman filter (KF) and multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%∼78% with the multi-sensor data fusion. Comparing with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, increase of the motion constraints and the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067

  10. An approach to optimal hyperspectral and multispectral signature and image fusion for detecting hidden targets on shorelines

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.

    2015-10-01

    Hyperspectral and multispectral imagery of shorelines collected from airborne and shipborne platforms are used following pushbroom imagery corrections based on inertial measurement units, augmented global positioning data and Kalman filtering. Corrected radiance or reflectance images are then used to optimize synthetic high spatial resolution spectral signatures resulting from an optimized data fusion process. The process demonstrated utilizes littoral zone features from imagery acquired in the Gulf of Mexico region. Shoreline imagery along the Banana River, Florida, is presented that utilizes a technique that makes use of numerically embedded targets in both higher spatial resolution multispectral images and lower spatial resolution hyperspectral imagery. The fusion process developed utilizes optimization procedures that include random selection of regions and pixels in the imagery, and minimizing the difference between the synthetic signatures and observed signatures. The optimized data fusion approach allows detection of spectral anomalies in the resolution enhanced data cubes. Spectral-spatial anomaly detection is demonstrated using numerically embedded line targets within actual imagery. The approach allows one to test spectral signature anomaly detection and to identify features and targets. The optimized data fusion techniques and software allow one to perform sensitivity analysis and optimization in the singular value decomposition model building process and the 2-D Butterworth cutoff frequency and order numerical selection process. The data fusion "synthetic imagery" forms a basis for spectral-spatial resolution enhancement for optimal band selection and remote sensing algorithm development within "spectral anomaly areas". Sensitivity analysis demonstrates the data fusion methodology is most sensitive to (a) the pixels and features used in the SVD model building process and (b) the 2-D Butterworth cutoff frequency optimized by application of K

  11. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and had a duration greater than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
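    The complementary end of the filtering spectrum compared in this study can be illustrated with the classic fixed-gain complementary filter for a single inclination angle: integrate the gyroscope for short-term accuracy and blend in the accelerometer inclination to remove long-term drift. The blending gain, gyro bias and simulated signals below are hypothetical and much simpler than the non-linear observer evaluated in the paper.

```python
import numpy as np

# Fixed-gain complementary filter for pitch (hedged sketch; constants and
# simulated signals are hypothetical, not from the study).

fs, alpha = 100.0, 0.98
t = np.arange(0, 20, 1 / fs)
true_pitch = 20.0 * np.sin(0.5 * t)                          # degrees
rng = np.random.default_rng(5)
gyro = np.gradient(true_pitch, 1 / fs) + 0.5 + 0.2 * rng.standard_normal(t.size)
acc_pitch = true_pitch + 2.0 * rng.standard_normal(t.size)   # noisy but unbiased

est = np.zeros_like(t)
for k in range(1, t.size):
    gyro_prop = est[k - 1] + gyro[k] / fs            # integrate the rate
    est[k] = alpha * gyro_prop + (1 - alpha) * acc_pitch[k]   # blend in gravity

print("RMS pitch error (deg):", np.sqrt(np.mean((est - true_pitch) ** 2)))
```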

  12. Designing Sensor Networks by a Generalized Highly Optimized Tolerance Model

    NASA Astrophysics Data System (ADS)

    Miyano, Takaya; Yamakoshi, Miyuki; Higashino, Sadanori; Tsutsui, Takako

    A variant of the highly optimized tolerance model is applied to a toy problem of bioterrorism to determine the optimal arrangement of hypothetical bio-sensors to avert epidemic outbreak. A nonlinear loss function is utilized in searching for the optimal structure of the sensor network. The proposed method successfully averts disastrously large events, which cannot be achieved by the original highly optimized tolerance model.

  13. Study of data fusion algorithms applied to unattended ground sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, B.; Moras, J.; Dezert, Jean; Sella, G.

    2014-06-01

    In this paper, data obtained from a wireless unattended ground sensor network are used for tracking multiple ground targets (vehicles, pedestrians and animals) moving on and off the road network. The goal of the study is to evaluate several data fusion algorithms to select the best approach to establish the tactical situational awareness. The ground sensor network is composed of heterogeneous sensors (optronic, radar, seismic, acoustic, magnetic sensors) and data fusion nodes. The fusion nodes are small hardware platforms placed on the surveillance area that communicate together. In order to satisfy operational needs and the limited communication bandwidth between the nodes, we study several data fusion algorithms to track and classify targets in real time. A multiple targets tracking (MTT) algorithm is integrated in each data fusion node taking into account embedded constraints. The choice of the MTT algorithm is motivated by the limits of the chosen technology. In the fusion nodes, the distributed MTT algorithm exploits the road network information in order to constrain the multiple dynamic models. Then, a variable structure interacting multiple model (VS-IMM) is adapted to the road network topology. This algorithm is well known in centralized architectures, but it implies a modification of other data fusion algorithms to preserve the performance of the tracking under constraints. Based on such a VS-IMM MTT algorithm, we adapt classical data fusion techniques to make them work in three architectures: centralized, distributed and hierarchical. The sensor measurements are considered asynchronous, but the fusion steps are synchronized across all sensors. The performance of the data fusion algorithms is evaluated using simulated data and also validated on real data. The scenarios under analysis contain multiple targets with close and crossing trajectories involving data association uncertainties.

  14. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    One of the biggest challenges currently facing the robotics field is sensor data fusion. Unmanned robots carry many sophisticated sensors including visual and infrared cameras, radar, laser range finders, chemical sensors, accelerometers, gyros, and global positioning systems. By effectively fusing the data from these sensors, a robot would be able to form a coherent view of its world that could then be used to facilitate both autonomous and intelligent operation. Another distinct fusion problem is that of fusing data from teammates with data from onboard sensors. If an entire team of vehicles has the same worldview they will be able to cooperate much more effectively. Sharing worldviews is made even more difficult if the teammates have different sensor types. The final fusion challenge the robotics field faces is that of fusing data gathered by robots with data gathered by human teammates (soft sensors). Humans sense the world completely differently from robots, which makes this problem particularly difficult. The advantage of fusing data from humans is that it makes more information available to the entire team, thus helping each agent to make the best possible decisions. This thesis presents a system for fusing data from multiple unmanned aerial vehicles, unmanned ground vehicles, and human observers. The first issue this thesis addresses is that of centralized data fusion. This is a foundational data fusion issue, which has been very well studied. Important issues in centralized fusion include data association, classification, tracking, and robotics problems. Because these problems are so well studied, this thesis does not make any major contributions in this area, but does review it for completeness. The chapter on centralized fusion concludes with an example unmanned aerial vehicle surveillance problem that demonstrates many of the traditional fusion methods. The second problem this thesis addresses is that of distributed data fusion. Distributed data fusion

  15. Optimal control theory applied to fusion plasma thermal stabilization

    SciTech Connect

    Sager, G.; Maya, I.; Miley, G.H.

    1985-07-01

    Optimal control theory is applied to determine feedback control for thermal stability of a driven, subignition tokamak controlled by fuel injection and additional heating. It was found that the simplifications of the plasma burn dynamics and the control figure of merit required for the synthesis of optimal feedback laws were valid. Control laws were determined which allowed thermal stability in plasmas subject to 10% offset in temperature. The minimum ignition margin (defined as the difference between ignition temperature and the subignition operating point) was found to be 0.95 keV, corresponding to steady state heating requirements of less than 2% of fusion power.

  16. Non-Uniform Fusion Tree Generation in a Dynamic Multi-Sensor System.

    PubMed

    Yeun, Kyuoke; Kim, Daeyoung

    2017-05-04

    This paper addresses the proposal that the number of processed air tracks of a two-tier fusion process can be increased by applying a balanced fusion tree which can balance tracks across local fusion nodes. Every fusion cycle, a fusion process combines duplicate tracks from multiple radars and creates a single integrated air picture (SIAP). The two-tier fusion process divides the fusion process into local and global. The results of the local fusion process, executed at local fusion nodes, are used in the global fusion process. This hierarchical structure can be modeled as a fusion tree: each radar, local fusion node, and the central server is a leaf, internode, and the root, respectively. This paper presents a non-uniform fusion tree generation (NU-FTG) algorithm based on a clustering approach. In the NU-FTG, radars with higher scores get more chances to become local fusion nodes. The score of a radar is in proportion to the number of tracks of the radar and its neighbors. All radars execute the NU-FTG independently with the information of their neighbors. No prior information, such as the appropriate number of local fusion nodes, a predefined tree structure, or the positions of the radars, is required. The NU-FTG is evaluated in the OPNET (Optimized Network Engineering Tool) network simulator. Simulation results show that the NU-FTG performs better than existing clustering methods.
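
    A minimal sketch of a score-based election in the spirit of the NU-FTG description above: a radar's score is its own track count plus its neighbours' counts, and a radar becomes a local fusion node if no neighbour outscores it. The toy graph, track counts, and tie-breaking rule are illustrative assumptions, not the paper's algorithm.

    ```python
    # Hedged sketch of score-based local fusion node election over a radar graph.
    def elect_local_fusion_nodes(tracks, neighbors):
        score = {r: tracks[r] + sum(tracks[n] for n in neighbors[r]) for r in tracks}
        heads = set()
        for r in tracks:
            # ties broken by radar id so the election is deterministic
            if all((score[r], r) >= (score[n], n) for n in neighbors[r]):
                heads.add(r)
        return heads

    tracks = {"R1": 12, "R2": 5, "R3": 9, "R4": 2}
    neighbors = {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2", "R4"], "R4": ["R3"]}
    print(elect_local_fusion_nodes(tracks, neighbors))   # {'R2'} for this toy graph
    ```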

  17. Dynamic Reweighting of Three Modalities for Sensor Fusion

    PubMed Central

    Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J.

    2014-01-01

    We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ±1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a “fixed” reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion. PMID:24498252

  18. Multi-sensor multi-resolution image fusion for improved vegetation and urban area classification

    NASA Astrophysics Data System (ADS)

    Kumar, U.; Milesi, C.; Nemani, R. R.; Basu, S.

    2015-06-01

    In this paper, we perform multi-sensor multi-resolution data fusion of Landsat-5 TM bands (at 30 m spatial resolution) and multispectral bands of World View-2 (WV-2 at 2 m spatial resolution) through a linear spectral unmixing model. The advantages of fusing Landsat and WV-2 data are twofold: first, the spatial resolution of the Landsat bands increases to WV-2 resolution. Second, integration of data from two sensors adds two SWIR bands from the Landsat data to the fused product, which have advantages such as improved atmospheric transparency and material identification, for example, urban features, construction materials, moisture contents of soil and vegetation, etc. In 150 separate experiments, WV-2 data were clustered into 5, 10, 15, 20 and 25 spectral classes and data fusion was performed with 3x3, 5x5, 7x7, 9x9 and 11x11 kernel sizes for each Landsat band. The optimal fused bands were selected based on Pearson product-moment correlation coefficient, RMSE (root mean square error) and ERGAS index and were subsequently used for vegetation, urban area and dark objects (deep water, shadows) classification using a Random Forest classifier for a test site near Golden Gate Bridge, San Francisco, California, USA. Accuracy assessment of the classified images through error matrices before and after fusion showed that the overall accuracy and Kappa for fused data classification (93.74%, 0.91) were much higher than for Landsat data classification (72.71%, 0.70) and WV-2 data classification (74.99%, 0.71). This approach increased the spatial resolution of Landsat data to WV-2 spatial resolution while retaining the original Landsat spectral bands, with significant improvement in classification.
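
    The band-selection metrics named above (Pearson correlation, RMSE and ERGAS) can be computed as in the following sketch; the ERGAS expression uses the common definition with the 2 m/30 m resolution ratio, and the arrays are synthetic stand-ins for fused and reference bands.

    ```python
    # Hedged sketch of the quality metrics used to rank candidate fused bands.
    import numpy as np

    def rmse(ref, fused):
        return np.sqrt(np.mean((ref - fused) ** 2))

    def correlation(ref, fused):
        return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

    def ergas(ref_bands, fused_bands, ratio):
        """ERGAS = 100 * (h/l) * sqrt(mean over bands of (RMSE_k / mean_k)^2)."""
        terms = [(rmse(r, f) / np.mean(r)) ** 2 for r, f in zip(ref_bands, fused_bands)]
        return 100.0 * ratio * np.sqrt(np.mean(terms))

    rng = np.random.default_rng(1)
    ref = [rng.random((32, 32)) + 1 for _ in range(4)]                 # reference bands
    cand = [b + rng.normal(scale=0.05, size=b.shape) for b in ref]     # candidate fused bands
    print(correlation(ref[0], cand[0]), rmse(ref[0], cand[0]), ergas(ref, cand, ratio=2 / 30))
    ```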

  19. Adaptive Multi-sensor Data Fusion Model for In-situ Exploration of Mars

    NASA Astrophysics Data System (ADS)

    Schneiderman, T.; Sobron, P.

    2014-12-01

    Laser Raman spectroscopy (LRS) and laser-induced breakdown spectroscopy (LIBS) can be used synergistically to characterize the geochemistry and mineralogy of potential microbial habitats and biosignatures. The value of LRS and LIBS has been recognized by the planetary science community: (i) NASA's Mars2020 mission features a combined LRS-LIBS instrument, SuperCam, and an LRS instrument, SHERLOC; (ii) an LRS instrument, RLS, will fly on ESA's 2018 ExoMars mission. The advantages of combining LRS and LIBS are evident: (1) LRS/LIBS can share hardware components; (2) LIBS reveals the relative concentration of major (and often trace) elements present in a sample; and (3) LRS yields information on the individual mineral species and their chemical/structural nature. Combining data from LRS and LIBS enables definitive mineral phase identification with precise chemical characterization of major, minor, and trace mineral species. New approaches to data processing are needed to analyze large amounts of LRS+LIBS data efficiently and maximize the scientific return of integrated measurements. Multi-sensor data fusion (MSDF) is a method that allows for robust sample identification through automated acquisition, processing, and combination of data. It optimizes information usage, yielding a more robust characterization of a target than could be acquired through single sensor use. We have developed a prototype fuzzy logic adaptive MSDF model aimed towards the unsupervised characterization of Martian habitats and their biosignatures using LRS and LIBS datasets. Our model also incorporates fusion of microimaging (MI) data - critical for placing analyses in geological and spatial context. Here, we discuss the performance of our novel MSDF model and demonstrate that automated quantification of the salt abundance in sulfate/clay/phyllosilicate mixtures is possible through data fusion of collocated LRS, LIBS, and MI data.

  20. Optimization of wireless Bluetooth sensor systems.

    PubMed

    Lonnblad, J; Castano, J; Ekstrom, M; Linden, M; Backlund, Y

    2004-01-01

    Within this study, three different Bluetooth sensor systems, replacing cables for transmission of biomedical sensor data, have been designed and evaluated. The three sensor architectures are built on 1-, 2- and 3-chip solutions and, depending on the monitoring situation and signal character, different solutions are optimal. Essential parameters for all systems have been low physical weight and small size, resistance to interference and interoperability with other technologies such as global or local networks, PCs and mobile phones. Two different biomedical input signals, ECG and PPG (photoplethysmography), have been used to evaluate the three solutions. The study shows that it is possible to continuously transmit an analogue signal. At low sampling rates and for slowly varying parameters, such as monitoring the heart rate with PPG, the 1-chip solution is the most suitable, offering low power consumption and thus a longer battery lifetime or a smaller battery, minimizing the weight of the sensor system. On the other hand, when a higher sampling rate is required, as for an ECG, the 3-chip architecture, with an FPGA or micro-controller, offers the best solution and performance. Our conclusion is that Bluetooth might be useful in replacing cables of medical monitoring systems.

  1. Optimized data fusion for K-means Laplacian clustering.

    PubMed

    Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A K; De Moor, Bart; Moreau, Yves

    2011-01-01

    We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html.

  2. Optimized data fusion for kernel k-means clustering.

    PubMed

    Yu, Shi; Tranchevent, Léon-Charles; Liu, Xinhai; Glänzel, Wolfgang; Suykens, Johan A K; De Moor, Bart; Moreau, Yves

    2012-05-01

    This paper presents a novel optimized kernel k-means algorithm (OKKC) to combine multiple data sources for clustering analysis. The algorithm uses an alternating minimization framework to optimize the cluster membership and kernel coefficients as a nonconvex problem. In the proposed algorithm, the problem to optimize the cluster membership and the problem to optimize the kernel coefficients are both based on the same Rayleigh quotient objective; therefore the proposed algorithm converges locally. OKKC has a simpler procedure and lower complexity than other algorithms proposed in the literature. Simulated and real-life data fusion applications are experimentally studied, and the results validate that the proposed algorithm has comparable performance; moreover, it is more efficient on large-scale data sets. (The Matlab implementation of the OKKC algorithm is downloadable from http://homes.esat.kuleuven.be/~sistawww/bio/syu/okkc.html.)

  3. A Radiosonde Using a Humidity Sensor Array with a Platinum Resistance Heater and Multi-Sensor Data Fusion

    PubMed Central

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-01-01

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor including four humidity sensors to collect meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain a reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes. PMID:23857263

  4. A radiosonde using a humidity sensor array with a platinum resistance heater and multi-sensor data fusion.

    PubMed

    Shi, Yunbo; Luo, Yi; Zhao, Wenjie; Shang, Chunxue; Wang, Yadong; Chen, Yinsheng

    2013-07-12

    This paper describes the design and implementation of a radiosonde which can measure the meteorological temperature, humidity, pressure, and other atmospheric data. The system is composed of a CPU, microwave module, temperature sensor, pressure sensor and humidity sensor array. In order to effectively solve the humidity sensor condensation problem due to the low temperatures in the high altitude environment, a capacitive humidity sensor including four humidity sensors to collect meteorological humidity and a platinum resistance heater was developed using micro-electro-mechanical-system (MEMS) technology. A platinum resistance wire with 99.999% purity and 0.023 mm in diameter was used to obtain the meteorological temperature. A multi-sensor data fusion technique was applied to process the atmospheric data. Static and dynamic experimental results show that the designed humidity sensor with platinum resistance heater can effectively tackle the sensor condensation problem, shorten response times and enhance sensitivity. The humidity sensor array can improve measurement accuracy and obtain a reliable initial meteorological humidity data, while the multi-sensor data fusion technique eliminates the uncertainty in the measurement. The radiosonde can accurately reflect the meteorological changes.

  5. Optimal Sensor Locations for Structural Identification

    NASA Technical Reports Server (NTRS)

    Udwadia, F. E.; Garba, J.

    1985-01-01

    The optimum sensor location problem, OSLP, may be thought of in terms of the set of systems, S, the class of input time functions, I, and the identification algorithm (estimator) used, E. Thus, for a given time history of input, the technique of determining the OSL requires, in general, the solution of the optimization and the identification problems simultaneously. A technique which uncouples the two problems is introduced. This is done by means of the concept of an efficient estimator for which the covariance of the parameter estimates is inversely proportional to the Fisher Information Matrix.
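
    The abstract's key point, that for an efficient estimator the parameter covariance is inversely proportional to the Fisher Information Matrix, suggests a D-optimality style selection such as the sketch below, where candidate sensor locations are scored by the log-determinant of the resulting FIM. The linear measurement model, the candidate sensitivity rows, and the exhaustive search are illustrative assumptions, not the paper's uncoupling technique.

    ```python
    # Hedged sketch of D-optimal sensor placement: with a linear(ized) model
    # y_i = h_i^T theta + noise, the FIM is the sum of h_i h_i^T / sigma_i^2 over
    # the chosen sensors, and an efficient estimator has covariance ~ FIM^{-1}.
    import numpy as np
    from itertools import combinations

    def fim(rows, sigma=1.0):
        H = np.array(rows)
        return H.T @ H / sigma ** 2

    def best_locations(candidates, k):
        best, best_logdet = None, -np.inf
        for subset in combinations(range(len(candidates)), k):
            m = fim([candidates[i] for i in subset])
            sign, logdet = np.linalg.slogdet(m + 1e-9 * np.eye(m.shape[0]))
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = subset, logdet
        return best, best_logdet

    rng = np.random.default_rng(2)
    candidates = rng.normal(size=(8, 3))      # 8 candidate locations, 3 parameters
    print(best_locations(candidates, k=3))
    ```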

  6. A New Framework for Robust Retrieval and Fusion of Active/Passive Multi-Sensor Precipitation

    NASA Astrophysics Data System (ADS)

    Ebtehaj, M.; Foufoula-Georgiou, E.; Bras, R. L.

    2014-12-01

    This study introduces a new inversion approach for simultaneous retrieval and optimal fusion of multi-sensor passive/active precipitation spaceborne observations relevant to the Global Precipitation Measurement (GPM) constellation of satellites. This approach uses a modern Maximum a Posteriori (MAP) Bayesian estimator and variational principles to obtain a robust estimate of the rainfall profile from multiple sources of observationally- and physically-based a priori generated databases. The MAP estimator makes use of a constrained mixed-norm regularization that warrants improved stability and reduced estimation error compared to the classic least-squares estimators often used in Bayesian rainfall retrieval techniques. We demonstrate the promise of our framework via detailed algorithmic implementation using the passive and active multi-sensor observations provided by the microwave imager (TMI) and precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite. To this end, we simultaneously obtain an observationally-driven retrieval of the entire precipitation profile using the coincidental TMI-PR observations and then optimally combine it with a first guess derived from a physically-consistent a priori collected database of the TMI-2A12 operational product. We elucidate the performance of our algorithm for a wide range of storm environments with a specific focus on extreme and light precipitation events over land and coastal areas for hydrologic applications. The results are also validated against the ground-based observations and the standard TRMM products at seasonal and annual timescales.

  7. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
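
    The optimal linear fusion argument above (scans weighted by the inverse of their effective measurement variance beat naive averaging) is captured by the standard inverse-variance combination sketched below; the variances are illustrative numbers rather than values derived from real scans.

    ```python
    # Hedged sketch: inverse-variance weighting yields a fused estimate whose
    # variance is lower than that of a naive average of the two scans.
    import numpy as np

    def fuse(measurements, variances):
        w = 1.0 / np.asarray(variances)
        w = w / w.sum()
        fused = np.dot(w, measurements)
        fused_var = 1.0 / np.sum(1.0 / np.asarray(variances))
        return fused, fused_var

    meas = np.array([10.2, 9.6])      # lesion size from two scans (mm), illustrative
    var = np.array([0.4, 1.6])        # scan 1 better aligned with lesion elongation
    print(fuse(meas, var))            # fused variance 0.32 < naive-average variance 0.5
    ```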

  8. Staggered scheduling of sensor estimation and fusion for tracking over long-haul links

    SciTech Connect

    Liu, Qiang; Rao, Nageswara S. V.; Wang, Xin

    2016-08-01

    Networked sensing can be found in a multitude of real-world applications. Here, we focus on the communication-and computation-constrained long-haul sensor networks, where sensors are remotely deployed over a vast geographical area to perform certain tasks. Of special interest is a class of such networks where sensors take measurements of one or more dynamic targets and send their state estimates to a remote fusion center via long-haul satellite links. The severe loss and delay over such links can easily reduce the amount of sensor data received by the fusion center, thereby limiting the potential information fusion gain and resulting in suboptimal tracking performance. In this paper, starting with the temporal-domain staggered estimation for an individual sensor, we explore the impact of the so-called intra-state prediction and retrodiction on estimation errors. We then investigate the effect of such estimation scheduling across different sensors on the spatial-domain fusion performance, where the sensing time epochs across sensors are scheduled in an asynchronous and staggered manner. In particular, the impact of communication delay and loss as well as sensor bias on such scheduling is explored by means of numerical and simulation studies that demonstrate the validity of our analysis.
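
    A minimal sketch of one ingredient discussed above: aligning staggered, asynchronously produced sensor estimates to a common fusion epoch by intra-state prediction (a constant-velocity model here) before combining them. The track convention, noise levels, and the naive information-weighted combination are illustrative assumptions.

    ```python
    # Hedged sketch: predict each sensor's (position, velocity) estimate to the
    # fusion epoch, then combine the aligned estimates by inverse covariance.
    import numpy as np

    def predict(x, P, dt, q=0.01):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
        return F @ x, F @ P @ F.T + Q

    def fuse(estimates):
        """Naive information-weighted combination of time-aligned estimates."""
        info = sum(np.linalg.inv(P) for _, P in estimates)
        vec = sum(np.linalg.inv(P) @ x for x, P in estimates)
        P_f = np.linalg.inv(info)
        return P_f @ vec, P_f

    t_fusion = 10.0
    sensor_estimates = [                      # (state, covariance, timestamp)
        (np.array([100.0, 4.0]), np.diag([4.0, 0.5]), 9.2),
        (np.array([101.5, 3.8]), np.diag([6.0, 0.8]), 9.7),
    ]
    aligned = [predict(x, P, t_fusion - t) for x, P, t in sensor_estimates]
    print(fuse(aligned))
    ```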

  10. Staggered scheduling of sensor estimation and fusion for tracking over long-haul links

    DOE PAGES

    Liu, Qiang; Rao, Nageswara S. V.; Wang, Xin

    2016-08-01

    Networked sensing can be found in a multitude of real-world applications. Here, we focus on the communication-and computation-constrained long-haul sensor networks, where sensors are remotely deployed over a vast geographical area to perform certain tasks. Of special interest is a class of such networks where sensors take measurements of one or more dynamic targets and send their state estimates to a remote fusion center via long-haul satellite links. The severe loss and delay over such links can easily reduce the amount of sensor data received by the fusion center, thereby limiting the potential information fusion gain and resulting in suboptimal tracking performance. In this paper, starting with the temporal-domain staggered estimation for an individual sensor, we explore the impact of the so-called intra-state prediction and retrodiction on estimation errors. We then investigate the effect of such estimation scheduling across different sensors on the spatial-domain fusion performance, where the sensing time epochs across sensors are scheduled in an asynchronous and staggered manner. In particular, the impact of communication delay and loss as well as sensor bias on such scheduling is explored by means of numerical and simulation studies that demonstrate the validity of our analysis.

  11. SPOT-A SENSOR PLACEMENT OPTIMIZATION TOOL FOR ...

    EPA Pesticide Factsheets

    journal article This paper presents SPOT, a Sensor Placement Optimization Tool. SPOT provides a toolkit that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world CWS design applications. This paper provides an overview of SPOT’s key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems.

  13. Efficient Multi-Source Data Fusion for Decentralized Sensor Networks

    DTIC Science & Technology

    2006-10-01

    Information Fusion, 7-10 August 2001, Montreal, Canada. [2] E. Nettleton , “Decentralised Architectures for Tracking and Navigation with Multiple Flight...Proceedings of the 5th International Conference on Information Fusion, 8-11 July 2002, Annapolis, MD. [16] M. Ridley, E Nettleton , S. Sukkarieh and H...Decentralized Sensing Networks,” Proceedings of the 4th International Conference on Information Fusion, 7-10 August 2001, Montreal, Canada. [2] E. Nettleton

  14. Optimization of Microelectronic Devices for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    2000-01-01

    The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments and communication systems among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space (searching for optimized performance by repeated fabrication efforts) is infeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design, will be presented using genetic algorithm global optimization methods.

  15. Phase 1 report on sensor technology, data fusion and data interpretation for site characterization

    SciTech Connect

    Beckerman, M.

    1991-10-01

    In this report we discuss sensor technology, data fusion and data interpretation approaches of possible maximal usefulness for subsurface imaging and characterization of land-fill waste sites. Two sensor technologies, terrain conductivity using electromagnetic induction and ground penetrating radar, are described and the literature on the subject is reviewed. We identify the maximum entropy stochastic method as one providing a rigorously justifiable framework for fusing the sensor data, briefly summarize work done by us in this area, and examine some of the outstanding issues with regard to data fusion and interpretation. 25 refs., 17 figs.

  16. Fusion of threshold rules for target detection in wireless sensor networks

    SciTech Connect

    Zhu, Mengxia; Ding, Shi-You; Brooks, Richard R; Wu, Qishi; Rao, Nageswara S

    2010-03-01

    We propose a binary decision fusion rule that reaches a global decision on the presence of a target by integrating local decisions made by multiple sensors. Without requiring a priori probability of target presence, the fusion threshold bounds derived using Chebyshev's inequality ensure a higher hit rate and lower false alarm rate compared to the weighted averages of individual sensors. The Monte Carlo-based simulation results show that the proposed approach significantly improves target detection performance, and can also be used to guide the actual threshold selection in practical sensor network implementation under certain error rate constraints.
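
    A minimal sketch of the counting-rule fusion described above, with the global threshold set from the no-target mean and variance of the sum of local decisions via a Chebyshev-style bound; the local false-alarm rates and the target bound are illustrative.

    ```python
    # Hedged sketch of binary decision fusion with a Chebyshev-style threshold.
    import numpy as np

    def fusion_threshold(p_fa, alpha=0.05):
        """Threshold on the number of local detections so that, by Chebyshev's
        inequality, P(sum >= T | no target) <= alpha."""
        p_fa = np.asarray(p_fa)
        mu0 = p_fa.sum()
        sigma0 = np.sqrt((p_fa * (1 - p_fa)).sum())
        return mu0 + sigma0 / np.sqrt(alpha)

    def fuse_decisions(decisions, threshold):
        return int(sum(decisions) >= threshold)

    p_fa = [0.10, 0.15, 0.05, 0.20]            # local false-alarm rates (illustrative)
    T = fusion_threshold(p_fa, alpha=0.05)
    print(T, fuse_decisions([1, 1, 1, 1], T))  # all four local detections -> global hit
    ```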

  17. Statistical sensor fusion of ECG data using automotive-grade sensors

    NASA Astrophysics Data System (ADS)

    Koenig, A.; Rehg, T.; Rasshofer, R.

    2015-11-01

    Driver states such as fatigue, stress, aggression, distraction or even medical emergencies continue to lead to severe mistakes in driving and promote accidents. A pathway towards improving driver state assessment can be found in psycho-physiological measures to directly quantify the driver's state from physiological recordings. Although heart rate is a well-established physiological variable that reflects cognitive stress, obtaining heart rate contactless and reliably is a challenging task in an automotive environment. Our aim was to investigate how sensor fusion of two automotive-grade sensors would influence the accuracy of automatic classification of cognitive stress levels. We induced cognitive stress in subjects and estimated levels from their heart rate signals, acquired from automotive-ready ECG sensors. Using signal quality indices and Kalman filters, we were able to decrease the Root Mean Squared Error (RMSE) of heart rate recordings by 10 beats per minute. We then trained a neural network to classify the cognitive workload state of subjects from heart rate and compared classification performance for ground truth, the individual sensors and the fused heart rate signal. We obtained 5% higher correct classification by fusing signals compared to individual sensors, staying only 4% below the maximum possible classification accuracy from ground truth. These results are a first step towards real world applications of psycho-physiological measurements in vehicle settings. Future implementations of driver state modeling will be able to draw from a larger pool of data sources, such as additional physiological values or vehicle-related data, which can be expected to drive classification to significantly higher values.
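
    The signal-quality-weighted Kalman fusion idea above can be sketched with a scalar random-walk filter that performs one measurement update per ECG sensor, inflating the measurement noise when the signal quality index is poor. All variances, the SQI mapping, and the sample data are illustrative assumptions.

    ```python
    # Hedged sketch: scalar Kalman filter fusing two heart-rate streams whose
    # measurement noise is scaled by a signal-quality index (SQI).
    import numpy as np

    def fuse_heart_rate(hr_a, sqi_a, hr_b, sqi_b, q=1.0, r_base=4.0):
        x, p = hr_a[0], 25.0                    # initial state and variance (bpm^2)
        out = []
        for za, qa, zb, qb in zip(hr_a, sqi_a, hr_b, sqi_b):
            p += q                              # random-walk prediction
            for z, sqi in ((za, qa), (zb, qb)):
                r = r_base / max(sqi, 1e-3)     # poor quality -> large noise variance
                k = p / (p + r)
                x, p = x + k * (z - x), (1 - k) * p
            out.append(x)
        return np.array(out)

    hr_a = np.array([72, 74, 90, 75.0])          # sensor A, one artifact at index 2
    sqi_a = np.array([0.9, 0.9, 0.1, 0.9])
    hr_b = np.array([71, 73, 74, 74.0])
    sqi_b = np.array([0.8, 0.8, 0.8, 0.8])
    print(fuse_heart_rate(hr_a, sqi_a, hr_b, sqi_b))
    ```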

  18. A Bayesian machine learning method for sensor selection and fusion with application to on-board fault diagnostics

    NASA Astrophysics Data System (ADS)

    Subrahmanya, Niranjan; Shin, Yung C.; Meckl, Peter H.

    2010-01-01

    In applications like feature-level sensor fusion, the problem of selecting an optimal number of sensors can lead to reduced maintenance costs and the creation of compact online databases for future use. This problem of sensor selection can be reduced to the problem of selecting an optimal set of groups of features during model selection. This is a more complex problem than the problem of feature selection, which has been recognized as a key aspect of statistical model identification. This work proposes a new algorithm based on the use of a Bayesian framework for the purpose of selecting groups of features during regression and classification. The hierarchical Bayesian formulation introduces grouping for the parameters of a generalized linear model and the model hyper-parameters are estimated using an empirical Bayes procedure. A novel aspect of the algorithm is its ability to simultaneously perform feature selection within groups to reduce over-fitting of the data. Further, the parameters obtained from this algorithm can be used to obtain a rank order among the selected sensors. The performance of the algorithm is first tested on a synthetic regression example. Finally, it is applied to the problem of fault detection in diesel engines (30,000 data records from 43 sensors, 8 classes) and used to compare the misclassification rates with a varying number of sensors.

  19. Bearings Only Tracking with Fusion from Heterogenous Passive Sensors: ESM/EO and Acoustic

    DTIC Science & Technology

    2017-02-01

    Bearings-Only Tracking with Fusion from Heterogenous Passive Sensors: ESM/EO and Acoustic Rong Yang, Gee Wah Ng DSO National Laboratories, 20 Science...moving or stationary platform. The two sensors are an ESM/EO with negligible propagation delay and an acoustic sensor with significant propagation...delay. Since target range information is contained in the acoustic propagation delay, the problem is therefore observable even when the platform is

  20. Gesture-Directed Sensor-Information Fusion for Communication in Hazardous Environments

    DTIC Science & Technology

    2010-06-01

    sensors for gesture recognition [1], [2]. An important future step to enhance the effectiveness of the war fighter is to integrate CBRN and other...addition to the standard eGlove magnetic and motion gesture recognition sensors. War fighters progressing through a battlespace are now providing...a camera for gesture recognition is absolutely not an option for a CBRN war fighter in a battlefield scenario. Multi sensor fusion is commonly

  1. Multi-sensor fusion system using wavelet-based detection algorithm applied to physiological monitoring under high-G environment

    NASA Astrophysics Data System (ADS)

    Ryoo, Han Chool

    2000-06-01

    A significant problem in physiological state monitoring systems with single data channels is high rates of false alarm. In order to reduce false alarm probability, several data channels can be integrated to enhance system performance. In this work, we have investigated a sensor fusion methodology applicable to physiological state monitoring, which combines local decisions made by dispersed detectors. Difficulties in biophysical signal processing are associated with nonstationary signal patterns and individual characteristics of human physiology resulting in nonidentical observation statistics. Thus a two-compartment design, a modified version of well-established fusion theory in communication systems, is presented and applied to biological signal processing where we combine discrete wavelet transforms (DWT) with sensor fusion theory. The signals were decomposed in the time-frequency domain by the discrete wavelet transform (DWT) to capture localized transient features. Local decisions by wavelet power analysis are followed by global decisions at the data fusion center operating under an optimization criterion, i.e., minimum error criterion (MEC). We used three signals acquired from human volunteers exposed to high-G forces at the human centrifuge/dynamic flight simulator facility in Warminster, PA. The subjects performed anti-G straining maneuvers to protect them from the adverse effects of high-G forces. These maneuvers require muscular tensing and altered breathing patterns. We attempted to determine the subject's state by detecting the presence or absence of the voluntary anti-G straining maneuvers (AGSM). During the exposure to high G force the respiratory patterns, blood pressure and electroencephalogram (EEG) were measured to determine changes in the subject's state. Experimental results show that the probability of false alarm under MEC can be significantly reduced by applying the same rule found at local thresholds to all subjects, and MEC can be employed as a
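
    A minimal sketch of the two-compartment idea: local detectors threshold wavelet sub-band power per channel, and the fusion center applies a counting rule whose threshold would be chosen on training data (a stand-in for the minimum error criterion). PyWavelets is assumed; the wavelet, decomposition level, thresholds and synthetic channels are illustrative, not the study's configuration.

    ```python
    # Hedged sketch: wavelet-power local decisions followed by a k-out-of-n fusion rule.
    import numpy as np
    import pywt

    def local_decision(signal, wavelet="db4", level=4, threshold=2.0):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        detail_power = sum(float(np.mean(c ** 2)) for c in coeffs[1:])  # skip approximation
        return int(detail_power > threshold)

    def fused_decision(decisions, k):
        """k-out-of-n counting rule; k would be picked on training data to minimize errors."""
        return int(sum(decisions) >= k)

    rng = np.random.default_rng(3)
    channels = [rng.normal(scale=s, size=256) for s in (0.5, 1.5, 2.0)]  # resp, BP, EEG stand-ins
    local_votes = [local_decision(c, threshold=2.0) for c in channels]
    print(local_votes, fused_decision(local_votes, k=2))
    ```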

  2. The research of auto-focusing method for the image mosaic and fusion system with multi-sensor

    NASA Astrophysics Data System (ADS)

    Pang, Ke; Yao, Suying; Shi, Zaifeng; Xu, Jiangtao; Liu, Jiangming

    2013-09-01

    In modern image processing, due to the development of digital image processing, the focus of a sensor can be automatically set by the digital processing system through computation. On the other hand, focusing all sensors synchronously and consistently is one of the most important factors for image mosaic and fusion processing, especially for a system with multiple sensors placed in a line in order to capture wide-angle video information. Images sampled by sensors with different focal length values will always increase the complexity of the affine matrix used in the subsequent image mosaic and fusion, potentially reducing the efficiency of the system and consuming more power. Here, a new fast evaluation method based on the gray value variance of the image pixels is proposed to find the common focal length value for all sensors that achieves the best image sharpness. For the multi-frame pictures sampled from different sensors that have been adjusted and regarded as time-synchronized, the gray value variances of adjacent pixels are determined to generate one curve. This curve is the focus measure function which describes the relationship between the image sharpness and the focal length value of the sensor. On the basis of all focus measure functions of all sensors in the image processing system, this paper uses the least squares method to fit the discrete curves, gives one objective function for the multi-sensor system, and then finds the optimal solution corresponding to the extreme value of the image sharpness by evaluating the objective function. This optimal focal length value is the common parameter for all sensors in this system. By setting the common focal length value, on the premise of ensuring image sharpness, the computation of the affine matrix, which is the core processing of the image mosaic and fusion that stitches all those pictures into one wide-angle image, will be
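
    A minimal sketch of the focus-measure idea described above: sharpness is scored by the gray-value variance of adjacent-pixel differences, each sensor's score curve over candidate focal settings is fitted by least squares, and the common setting maximizing the summed fitted curves is selected. The blur model, polynomial degree, and settings grid are illustrative assumptions.

    ```python
    # Hedged sketch: common focal setting chosen from fitted per-sensor focus curves.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def focus_measure(image):
        """Gray-value variance of horizontal and vertical pixel differences."""
        dx = np.diff(image.astype(float), axis=1)
        dy = np.diff(image.astype(float), axis=0)
        return dx.var() + dy.var()

    def common_focus(focal_settings, scores_per_sensor, degree=2):
        total = np.zeros_like(focal_settings, dtype=float)
        for scores in scores_per_sensor:
            coeffs = np.polyfit(focal_settings, scores, degree)   # least-squares fit
            total += np.polyval(coeffs, focal_settings)
        return focal_settings[np.argmax(total)]

    rng = np.random.default_rng(4)
    base = rng.random((64, 64))
    settings = np.linspace(1.0, 5.0, 9)
    scores = [[focus_measure(gaussian_filter(base, abs(s - best))) for s in settings]
              for best in (2.8, 3.2)]           # two sensors, best focus at 2.8 and 3.2
    print(common_focus(settings, scores))       # compromise focal setting for both sensors
    ```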

  3. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking.

    PubMed

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-26

    Information from complementary and redundant sensors are often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process.

  4. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosability, subject to specified weight, volume, power, and cost constraints is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
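
    The sensor allocation problem above is knapsack-like; the sketch below uses a simple greedy gain/cost heuristic under a cost budget as a baseline, not the Lagrangian-relaxation solver the abstract refers to, and the candidate gains and costs are invented for illustration.

    ```python
    # Hedged sketch: greedy knapsack-style selection of sensors under a cost budget.
    def allocate_sensors(candidates, budget):
        """candidates: list of (name, diagnosability_gain, cost)."""
        chosen, spent = [], 0.0
        for name, gain, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
            if spent + cost <= budget:
                chosen.append(name)
                spent += cost
        return chosen, spent

    candidates = [("temp", 0.30, 1.0), ("press", 0.25, 2.0),
                  ("vib", 0.40, 3.0), ("current", 0.10, 0.5)]
    print(allocate_sensors(candidates, budget=4.0))   # (['temp', 'current', 'press'], 3.5)
    ```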

  6. Real-time EO/IR sensor fusion on a portable computer and head-mounted display

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Topiwala, Pankaj

    2007-04-01

    Multi-sensor platforms are widely used in surveillance video systems for both military and civilian applications. The complementary nature of different types of sensors (e.g. EO and IR sensors) makes it possible to observe the scene under almost any condition (day/night/fog/smoke). In this paper, we propose an innovative EO/IR sensor registration and fusion algorithm which runs in real time on a portable computing unit with a head-mounted display. The EO/IR sensor suite is mounted on a helmet for a dismounted soldier and the fused scene is shown in the goggle display after processing on the portable computing unit. The linear homography transformation between images from the two sensors is precomputed for the mid-to-far scene, which reduces the computational cost of the online calibration of the sensors. The system is implemented in highly optimized C++ code, with MMX/SSE, and performs real-time registration. The experimental results on real captured video show the system works very well both in speed and in performance.
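
    A minimal sketch of applying a precomputed EO-to-IR homography and alpha-blending the registered frames with OpenCV, in the spirit of the system described above; the homography values, color mapping, and file names are placeholders rather than the paper's calibration.

    ```python
    # Hedged sketch: warp the IR frame with a precomputed homography, then blend.
    import cv2
    import numpy as np

    H = np.array([[1.01, 0.00, 12.0],        # placeholder homography for mid-to-far scene
                  [0.00, 1.01, -8.0],
                  [0.00, 0.00, 1.0]])

    def fuse_frames(eo_frame, ir_frame, alpha=0.5):
        ir_gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY) if ir_frame.ndim == 3 else ir_frame
        h, w = eo_frame.shape[:2]
        ir_registered = cv2.warpPerspective(ir_gray, H, (w, h))
        ir_color = cv2.applyColorMap(ir_registered, cv2.COLORMAP_JET)   # pseudo-color IR
        return cv2.addWeighted(eo_frame, alpha, ir_color, 1 - alpha, 0)

    eo = cv2.imread("eo_frame.png")                         # placeholder file names
    ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
    if eo is not None and ir is not None:
        cv2.imwrite("fused.png", fuse_frames(eo, ir))
    ```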

  7. Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks

    PubMed Central

    Fu, Jun-Song; Liu, Yun

    2015-01-01

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system and they perform data fusion independently of each other. Then, the results are sent to the base station where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads will be added to a blacklist, and new cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which can help us to identify and delete the compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy. PMID:25608211
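
    A minimal sketch of the DCHM consistency check described above: two cluster heads fuse the same cluster's readings independently, the base station compares the two results with a dissimilarity coefficient, and inconsistent heads are blacklisted for re-election. The trimmed-mean fusion operator, the threshold, and the readings are illustrative assumptions.

    ```python
    # Hedged sketch: base-station comparison of two independent cluster-head fusions.
    import numpy as np

    def fuse(readings):
        r = np.sort(np.asarray(readings, dtype=float))
        return r[1:-1].mean() if len(r) > 2 else r.mean()   # trimmed mean

    def dissimilarity(a, b):
        return abs(a - b) / max(abs(a), abs(b), 1e-9)

    blacklist = set()
    readings_head1 = [21.1, 20.9, 21.3, 21.0]       # forwarded to cluster head 1
    readings_head2 = [29.0, 30.5, 35.0, 28.0]       # head 2 is fed manipulated readings
    f1, f2 = fuse(readings_head1), fuse(readings_head2)
    if dissimilarity(f1, f2) > 0.05:                 # user-preset threshold
        blacklist.update({"head1", "head2"})         # trigger cluster-head re-election
    print(f1, f2, blacklist)
    ```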

  8. Double cluster heads model for secure and accurate data fusion in wireless sensor networks.

    PubMed

    Fu, Jun-Song; Liu, Yun

    2015-01-19

    Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system and they perform data fusion independently of each other. Then, the results are sent to the base station where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads will be added to a blacklist, and new cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which can help us to identify and delete the compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performed very well in data fusion security and accuracy.

  9. Remote Sensing Image Fusion Using Ica and Optimized Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hnatushenko, V. V.; Vasyliev, V. V.

    2016-06-01

    In remote-sensing image processing, fusion (pan-sharpening) is the process of merging high-resolution panchromatic and lower resolution multispectral (MS) imagery to create a single high-resolution color image. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. However, the pan-sharpened images produced by these methods suffer from high color distortion of the spectral information. In this paper, to minimize the spectral distortion we propose a remote sensing image fusion method which combines Independent Component Analysis (ICA) and an optimized wavelet transform. The proposed method is based on the selection of multiscale components obtained after ICA of the images on the basis of their wavelet decomposition, and on the formation of linear combinations of the detail coefficients of the wavelet decomposition of the image brightness distributions of the spectral channels, with iteratively adjusted weights. These coefficients are determined by solving an optimization problem with the criterion of maximizing the information entropy of the synthesized images formed by wavelet reconstruction. Further, the images of the spectral channels are reconstructed by the inverse wavelet transform, and the resulting image is formed by superposition of the obtained images. To verify its validity, the new proposed method is compared with several techniques using WorldView-2 satellite data in subjective and objective aspects. In our experiments we demonstrated that our scheme provides good spectral quality and efficiency. Spectral and spatial quality metrics in terms of RASE, RMSE, CC, ERGAS and SSIM are used in our experiments. The synthesized MS images show better contrast and clarity on the boundaries between the object of interest and the background. The results show that the proposed approach performs better than some of the compared methods according to the performance metrics.

  10. Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter

    PubMed Central

    Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei

    2016-01-01

    Motivated by the key importance of multi-sensor information fusion algorithms in the state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Navigation System (GPS), the Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system’s error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system will be adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two state propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the received information from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the
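
    One building block named above, a chi-square test on filter innovations used to screen measurements before they enter the fusion, can be sketched as follows; the dimensions, covariances, and confidence level are illustrative, and this is not the paper's full two-state-propagator design.

    ```python
    # Hedged sketch: normalized-innovation-squared test against a chi-square quantile.
    import numpy as np
    from scipy.stats import chi2

    def innovation_fault_test(z, z_pred, S, confidence=0.99):
        """Return (is_fault, statistic): flag if nu^T S^-1 nu exceeds the quantile."""
        nu = z - z_pred
        stat = float(nu.T @ np.linalg.inv(S) @ nu)
        threshold = chi2.ppf(confidence, df=len(z))
        return stat > threshold, stat

    z_pred = np.array([100.0, 50.0])                     # predicted measurement (m)
    S = np.diag([4.0, 4.0])                              # innovation covariance
    print(innovation_fault_test(np.array([101.0, 51.0]), z_pred, S))   # healthy
    print(innovation_fault_test(np.array([115.0, 60.0]), z_pred, S))   # flagged as faulty
    ```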

  11. Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.

    PubMed

    Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei

    2016-11-02

    Motivated by the key importance of multi-sensor information fusion algorithms in the state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Navigation System (GPS), the Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system will be adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two state propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the received information from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the

  12. Combined Statistical, Biological and Categorical Models for Sensor Fusion

    DTIC Science & Technology

    2010-08-01

    ...datum it has perceived, is its percept sequence, and the agent's behavior is determined by a mapping or a function from the set of percept sequences... [diagram: coordinating agents, sensor agents, and a classifying agent] ...Currently, agents interact through a simple blackboard

  13. Optimization of integrated antennas for wireless sensors

    NASA Astrophysics Data System (ADS)

    Gandelli, A.; Mussetta, M.; Zich, R. E.

    2007-01-01

    Modern advances in sensor technology, digital electronics and radio frequency design have enabled the development of cheap, small, low-power sensory devices, integrating sensing, processing and communication capabilities. This work aims to present an overview of the benefits and of the most recent advances in antenna technologies, investigating the possibility of integrating enhanced solutions in a large distributed wireless sensor network for environmental monitoring. The antenna in fact is the key element in order to fully integrate a wireless microsystem on a single chip. The integration requires a small antenna on a low-loss substrate material compatible with the microelectronic devices. In fact, communication is usually the most energy-intensive operation a node performs. Therefore, at each terminal the application of integrated and miniaturized antennas can have a significant impact, in terms of not only system performance but also cost, energy consumption and terminal physical size. An integrated design technique for a microstrip antenna on a complex dielectric substrate is presented here. For small bit rate wireless networks, microstrip antennas are a good choice. The simplicity of realization, the low cost, the flexibility of use and the reduced dimensions make them perfect for on-chip integration. These objectives are instrumental in selecting elements that can conform to the geometry of the device. The optimization of the wireless device is also presented, carefully adjusting parameters such as the shape and dimensions of the antenna in order to develop different layers of communication in the same device, thus endowing it with multiband capabilities.

  14. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis

    PubMed Central

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-01-01

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
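
    The fusion step of such a method ultimately relies on Dempster's rule of combination. The sketch below shows only that rule for two sensor mass functions over a small frame of discernment; how Z-numbers are converted into these masses is described in the paper and is not reproduced here, and the numerical masses are illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dict mapping frozenset hypotheses -> mass (each sums to 1).
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two sensors reporting on fault hypotheses F1 and F2
F1, F2 = frozenset({"F1"}), frozenset({"F2"})
BOTH = F1 | F2
m_sensor1 = {F1: 0.7, F2: 0.1, BOTH: 0.2}
m_sensor2 = {F1: 0.6, F2: 0.2, BOTH: 0.2}
print(dempster_combine(m_sensor1, m_sensor2))   # mass on F1 rises to 0.85
```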

  15. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis.

    PubMed

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-09-15

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster-Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection.

  16. Optimal Fusion Estimation with Multi-Step Random Delays and Losses in Transmission

    PubMed Central

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2017-01-01

    This paper is concerned with the optimal fusion estimation problem in networked stochastic systems with bounded random delays and packet dropouts, which unavoidably occur during the data transmission in the network. The measured outputs from each sensor are perturbed by random parameter matrices and white additive noises, which are cross-correlated between the different sensors. Least-squares fusion linear estimators including filter, predictor and fixed-point smoother, as well as the corresponding estimation error covariance matrices, are designed via the innovation analysis approach. The proposed recursive algorithms depend on the delay probabilities at each sampling time, but do not need to know whether a particular measurement is delayed or not. Moreover, the knowledge of the signal evolution model is not required, as the algorithms need only the first and second order moments of the processes involved. Some of the practical situations covered by the proposed system model with random parameter matrices are analyzed and the influence of the delays on the estimation accuracy is examined in a numerical example. PMID:28524112
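
    The effect being modelled (updates that simply fail to arrive) can be illustrated with a much simpler filter than the paper's covariance-based estimators: a scalar Kalman filter that skips the measurement update whenever the packet is dropped. Everything below (model, noise levels, loss probability) is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.95, 0.1, 0.5        # state transition, process and measurement noise
p_loss = 0.3                    # packet-dropout probability

x_true, x_est, P = 0.0, 0.0, 1.0
for k in range(50):
    # true signal and (possibly lost) measurement
    x_true = a * x_true + rng.normal(scale=np.sqrt(q))
    received = rng.random() > p_loss
    z = x_true + rng.normal(scale=np.sqrt(r)) if received else None

    # time update (always runs)
    x_est = a * x_est
    P = a * P * a + q

    # measurement update only when the packet arrives
    if z is not None:
        K = P / (P + r)
        x_est = x_est + K * (z - x_est)
        P = (1 - K) * P

    print(f"k={k:2d} received={received} estimate={x_est:+.3f} truth={x_true:+.3f}")
```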

  17. Assessing the Performance of Sensor Fusion Methods: Application to Magnetic-Inertial-Based Human Body Tracking

    PubMed Central

    Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2016-01-01

    Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027

  18. A novel tiered sensor fusion approach for terrain characterization and safe landing assessment

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Bajracharya, Max; Howard, Ayanna; Seraji, Homayoun

    2005-01-01

    This paper presents a novel tiered sensor fusion methodology for real-time terrain safety assessment. A combination of active and passive sensors, specifically, radar, lidar, and camera, operate in three tiers according to their inherent ranges of operation. Low-level terrain features (e.g. slope, roughness) and high-level terrain features (e.g. hills, craters) are integrated using principles of reasoning under uncertainty. Three methodologies are used to infer landing safety: Fuzzy Reasoning, Probabilistic Reasoning, and Evidential Reasoning. The safe landing predictions from the three fusion engines are consolidated in a subsequent decision fusion stage aimed at combining the strengths of each fusion methodology. Results from simulated spacecraft descents are presented and discussed.

  19. A novel tiered sensor fusion approach for terrain characterization and safe landing assessment

    NASA Technical Reports Server (NTRS)

    Serrano, Navid; Bajracharya, Max; Howard, Ayanna; Seraji, Homayoun

    2005-01-01

    This paper presents a novel tiered sensor fusion methodology for real-time terrain safety assessment. A combination of active and passive sensors, specifically, radar, lidar, and camera, operate in three tiers according to their inherent ranges of operation. Low-level terrain features (e.g. slope, roughness) and high-level terrain features (e.g. hills, craters) are integrated using principles of reasoning under uncertainty. Three methodologies are used to infer landing safety: Fuzzy Reasoning, Probabilistic Reasoning, and Evidential Reasoning. The safe landing predictions from the three fusion engines are consolidated in a subsequent decision fusion stage aimed at combining the strengths of each fusion methodology. Results from simulated spacecraft descents are presented and discussed.

  20. Nonlinearity Analysis and Parameters Optimization for an Inductive Angle Sensor

    PubMed Central

    Ye, Lin; Yang, Ming; Xu, Liang; Zhuang, Xiaoqi; Dong, Zhaopeng; Li, Shiyang

    2014-01-01

    Using the finite element method (FEM) and particle swarm optimization (PSO), a nonlinearity analysis based on parameter optimization is proposed to design an inductive angle sensor. Due to the structure complexity of the sensor, understanding the influences of structure parameters on the nonlinearity errors is a critical step in designing an effective sensor. Key parameters are selected for the design based on the parameters' effects on the nonlinearity errors. The finite element method and particle swarm optimization are combined for the sensor design to get the minimal nonlinearity error. In the simulation, the nonlinearity error of the optimized sensor is 0.053% in the angle range from −60° to 60°. A prototype sensor is manufactured and measured experimentally, and the experimental nonlinearity error is 0.081% in the angle range from −60° to 60°. PMID:24590353
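
    The optimization loop in such a design can be sketched as a plain particle swarm over the structure parameters. The FEM evaluation is replaced here by a hypothetical stand-in objective, so the example only illustrates the PSO mechanics, not the sensor model; all parameter names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonlinearity_error(params):
    """Stand-in objective; in the paper this would be an FEM evaluation."""
    x, y = params
    return (x - 1.2) ** 2 + 0.5 * (y + 0.7) ** 2 + 0.01 * np.sin(5 * x) ** 2

n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([nonlinearity_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([nonlinearity_error(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", gbest, "error:", nonlinearity_error(gbest))
```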

  1. Dynamic estimation of water hyacinth area using fusion of satellite and GPS sensors

    NASA Astrophysics Data System (ADS)

    Sun, Ling; Zhu, Zesheng

    2017-08-01

    The interaction of water hyacinth area with growth is known to be strongly influenced by area size, but little is known about the interdependent role that size and time have on dynamic estimation of water hyacinth area. We report on the fusion of satellite and GPS sensor data into an area growth model as a function of area and time. We employ a multi-sensor fusion technique that is able to generate uniform data for fitting the area growth model with complete control of area and time. Evidence of an overall Goodness of Fit Index of 0.9753 was obtained by using conventional statistical analysis. These findings suggest that the multi-sensor fusion technique readily supports area growth model development with high resolution. The differential equation is good at describing the spatial spread of water hyacinth. Moreover, it was found that the area growth model enjoys an appreciable advantage when it comes to harvesting water hyacinth.
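
    The record does not state the exact form of the growth model; a logistic curve is one common choice for bounded area growth, and fitting it to fused area observations could look like the sketch below. The observations, parameter values and goodness-of-fit computation are purely illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, A0, r):
    """Logistic area growth: carrying capacity K, initial area A0, rate r."""
    return K / (1 + (K / A0 - 1) * np.exp(-r * t))

# Illustrative fused area observations (hectares) at weekly intervals
t_obs = np.arange(0, 10)
area_obs = np.array([1.1, 1.6, 2.4, 3.4, 4.9, 6.5, 8.0, 9.1, 9.8, 10.3])

popt, _ = curve_fit(logistic, t_obs, area_obs, p0=[12.0, 1.0, 0.5])
pred = logistic(t_obs, *popt)
ss_res = np.sum((area_obs - pred) ** 2)
ss_tot = np.sum((area_obs - area_obs.mean()) ** 2)
print("fitted K, A0, r:", popt, "R^2:", 1 - ss_res / ss_tot)
```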

  2. The Architecture of Information Fusion System in Greenhouse Wireless Sensor Network Based on Multi-Agent

    NASA Astrophysics Data System (ADS)

    Zhu, Wenting; Chen, Ming

    In view of the current unprogressive situation of factory breeding in aquaculture, this article designed a standardized, informationized and intelligentized aquaculture system, proposed an information fusion architecture based on multi-agent in a greenhouse wireless sensor network (GWSN), and mainly researched the structural characteristics of the four-class information fusion based on distributed multi-agents and the method for constructing the structure inside every agent.

  3. Hybrid intelligent control concepts for optimal data fusion

    NASA Astrophysics Data System (ADS)

    Llinas, James

    1994-02-01

    In the post-Cold War era, Naval surface ship operations will be largely conducted in littoral waters to support regional military missions of all types, including humanitarian and evacuation activities, and amphibious mission execution. Under these conditions, surface ships will be much more isolated and vulnerable to a variety of threats, including maneuvering antiship missiles. To deal with these threats, the optimal employment of multiple shipborne sensors for maximum vigilance is paramount. This paper characterizes the sensor management problem as one of intelligent control, identifies some of the key issues in controller design, and presents one approach to controller design which is soon to be implemented and evaluated. It is argued that the complexity and hierarchical nature of problem formulation demands a hybrid combination of knowledge-based methods and scheduling techniques from 'hard' real-time systems theory for its solution.

  4. Reliability estimates for selected sensors in fusion applications

    SciTech Connect

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study to define several types of sensors in use, the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety.

  5. A flexible data fusion architecture for persistent surveillance using ultra-low-power wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hanson, Jeffrey A.; McLaughlin, Keith L.; Sereno, Thomas J.

    2011-06-01

    We have developed a flexible, target-driven, multi-modal, physics-based fusion architecture that efficiently searches sensor detections for targets and rejects clutter while controlling the combinatoric problems that commonly arise in data-driven fusion systems. The informational constraints imposed by long lifetime requirements make systems vulnerable to false alarms. We demonstrate that our data fusion system significantly reduces false alarms while maintaining high sensitivity to threats. In addition, mission goals can vary substantially in terms of targets-of-interest, required characterization, acceptable latency, and false alarm rates. Our fusion architecture provides the flexibility to match these trade-offs with mission requirements, unlike many conventional systems that require significant modifications for each new mission. We illustrate our data fusion performance with case studies that span many of the potential mission scenarios including border surveillance, base security, and infrastructure protection. In these studies, we deployed multi-modal sensor nodes - including geophones, magnetometers, accelerometers and PIR sensors - with low-power processing algorithms and low-bandwidth wireless mesh networking to create networks capable of multi-year operation. The results show our data fusion architecture maintains high sensitivities while suppressing most false alarms for a variety of environments and targets.

  6. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.

    PubMed

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-03-21

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into one of connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.
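
    The factor-graph idea can be illustrated on a toy 1-D problem: every measurement, whatever its rate, simply adds a factor (a weighted residual) to the graph, and the whole graph is solved at the fusion epoch. The sketch below uses a plain linear least-squares solve and illustrative numbers; it is not the authors' implementation, which would typically use a dedicated incremental solver.

```python
import numpy as np

# Unknowns: positions x0..x4 of a vehicle along a line (one per fusion epoch)
n = 5
rows, rhs, weights = [], [], []

def add_factor(coeffs, value, sigma):
    """Append one factor: sum(c * x[idx]) = value, with standard deviation sigma."""
    row = np.zeros(n)
    for idx, c in coeffs:
        row[idx] = c
    rows.append(row); rhs.append(value); weights.append(1.0 / sigma)

# Odometry factors (relative motion between consecutive epochs, high rate)
for k in range(n - 1):
    add_factor([(k + 1, 1.0), (k, -1.0)], value=1.0, sigma=0.1)

# Absolute position fixes from a slower sensor, arriving only at epochs 0 and 4
add_factor([(0, 1.0)], value=0.0, sigma=0.5)
add_factor([(4, 1.0)], value=4.2, sigma=0.5)

# Solve the whole graph as a weighted linear least-squares problem
A = np.array(rows) * np.array(weights)[:, None]
b = np.array(rhs) * np.array(weights)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fused positions:", np.round(x_hat, 3))
```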

  7. Sensor and data fusion of remotely sensed wide-area geospatial targets

    NASA Astrophysics Data System (ADS)

    Churchill, Stephen

    This thesis consists of the examination of methodologies for sensor fusion and data fusion of remotely sensed, sparse geospatial targets. Methods for attaining an increased awareness of targets in both tactical and strategic roles are proposed and examined. The example methodologies are demonstrated, and areas for further research noted. Discussions of the proposed methods are carried forth in the context of iceberg detection. Amongst the difficulties associated with the combination of sensor parameters and sensor data are the wide variety of technologies, performance ability, coverage, and reliability that are available to those users of remote sensing technology. Typical sensors include airborne search radars, marine search radars, surface wave radar, and satellite synthetic aperture radar. The ability to mitigate the related parametric variances is the test of an appropriate sensor or data fusion algorithm. Documented herein are the efforts to find such an algorithm using various statistical methods. Primary among these is Bayes' theorem combined with tracking systems such as the multiple hypothesis tracker. This and other methodologies are explored and evaluated, where appropriate. It will be demonstrated that such a methodology can combine sensor data returns to provide high-performance, wide-area situational awareness with sensors considered to have poor performance.

  8. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph

    PubMed Central

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-01-01

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into one of connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570

  9. Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase

    PubMed Central

    Lu, Kelin; Zhou, Rui

    2016-01-01

    A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
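
    The paper's removal of first-order redundant information between local IMM tracks is not reproduced here. As a point of reference only, covariance intersection is a standard track-to-track fusion rule that remains consistent when the cross-correlation between tracks is unknown; the sketch below applies it to two illustrative 2-D tracks.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two track estimates whose cross-correlation is unknown."""
    def fused_trace(w):
        Pinv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        return np.trace(np.linalg.inv(Pinv))
    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2))
    x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
    return x, P, w

# Two local tracks of the same target (position in metres, illustrative values)
x1, P1 = np.array([100.0, 55.0]), np.diag([25.0, 4.0])
x2, P2 = np.array([102.0, 54.0]), np.diag([4.0, 25.0])
x_f, P_f, w = covariance_intersection(x1, P1, x2, P2)
print("fused state:", np.round(x_f, 2), "weight:", round(w, 3))
```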

  10. Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase.

    PubMed

    Lu, Kelin; Zhou, Rui

    2016-08-15

    A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications.

  11. Autonomous navigation vehicle system based on robot vision and multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Wu, Lihong; Chen, Yingsong; Cui, Zhouping

    2011-12-01

    The architecture of an autonomous navigation vehicle based on robot vision and multi-sensor fusion technology is expatiated in this paper. In order to acquire more intelligence and robustness, accurate real-time collection and processing of information are realized by using this technology. The method to achieve robot vision and multi-sensor fusion is discussed in detail. Simulation results in several operating modes show that this intelligent vehicle performs better in barrier identification and avoidance and in path planning, which can provide higher reliability during vehicle operation.

  12. Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James

    2000-01-01

    The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact low-power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.

  13. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-07-03

    The sensor selection problem was investigated for the application of classification of a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performances of combinations of 12 sensors for the overall discrimination of combinations of nine ginsengs. The minimum numbers of sensors for discriminating each sample set to obtain an optimal classification performance were defined. The relation of the minimum numbers of sensors with number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach of sensor selection was proposed to estimate and compare the effective information capacity of each sensor.
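
    The selection procedure amounts to scoring every sensor subset with an LDA classifier and keeping the smallest subset that reaches the best accuracy. The sketch below runs that exhaustive search on a small simulated array; the real study used 12 sensors and ginseng samples, whereas the data, sizes and informative-sensor pattern here are synthetic assumptions.

```python
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors, n_classes = 180, 6, 3   # kept small so brute force is quick

# Simulated e-nose responses: each class shifts one of the first three sensors
X = rng.normal(size=(n_samples, n_sensors))
y = rng.integers(0, n_classes, size=n_samples)
for c in range(n_classes):
    X[y == c, c] += 2.0                        # sensors 0..2 carry the signal

best = (0.0, None)
for k in range(1, n_sensors + 1):
    for subset in combinations(range(n_sensors), k):
        score = cross_val_score(LinearDiscriminantAnalysis(),
                                X[:, subset], y, cv=5).mean()
        if score > best[0]:
            best = (score, subset)

print("best accuracy %.3f with sensors %s" % best)
```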

  14. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the application of classification of a set of ginsengs using a metal-oxide sensor-based homemade electronic nose with linear discriminant analysis. Samples (315) were measured for nine kinds of ginsengs using 12 sensors. We investigated the classification performances of combinations of 12 sensors for the overall discrimination of combinations of nine ginsengs. The minimum numbers of sensors for discriminating each sample set to obtain an optimal classification performance were defined. The relation of the minimum numbers of sensors with number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach of sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212

  15. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. This paper also gives an overview of the first results showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and a reduction of sensitivity can be

  16. Data fusion on a distributed heterogeneous sensor network.

    SciTech Connect

    Lamborn, Peter; Williams, Pamela J.

    2006-02-01

    Alarm-based sensor systems are being explored as a tool to expand perimeter security for facilities and force protection. However, the collection of increased sensor data has resulted in an insufficient solution that includes faulty data points. Data analysis is needed to reduce nuisance and false alarms, which will improve officials' decision making and confidence levels in the system's alarms. Moreover, operational costs can be allayed and losses mitigated if authorities are alerted only when a real threat is detected. In the current system, heuristics such as persistence of alarm and type of sensor that detected an event are used to guide officials' responses. We hypothesize that fusing data from heterogeneous sensors in the sensor field can provide more complete situational awareness than looking at individual sensor data. We propose a two-stage approach to reduce false alarms. First, we use self-organizing maps to cluster sensors based on global positioning coordinates and then train classifiers on the within-cluster data to obtain a local view of the event. Next, we train a classifier on the local results to compute a global solution. We investigate the use of machine learning techniques, such as k-nearest neighbor, neural networks, and support vector machines, to improve alarm accuracy. On simulated sensor data, the proposed approach identifies false alarms with greater accuracy than a weighted voting algorithm.
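
    The two-stage idea can be sketched with off-the-shelf components: cluster sensors by position (KMeans stands in for the self-organizing map), train a local classifier per cluster, then train a global classifier on the local outputs. All data below are simulated, the train/test split is deliberately simple, and none of the names or sizes come from the report.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_sensors, n_events = 30, 400

# Stage 0: cluster sensors by GPS coordinates (KMeans as a stand-in for the SOM)
coords = rng.uniform(0, 100, size=(n_sensors, 2))
cluster_of = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Simulated per-sensor readings for each event and a ground-truth label
readings = rng.normal(size=(n_events, n_sensors))
labels = rng.integers(0, 2, size=n_events)              # 1 = real threat
readings[labels == 1] += 0.8                            # threats raise readings

# Stage 1: one local classifier per cluster, producing a local threat score
local_scores = np.zeros((n_events, 3))
for c in range(3):
    cols = np.where(cluster_of == c)[0]
    clf = KNeighborsClassifier(n_neighbors=5).fit(readings[:200, cols], labels[:200])
    local_scores[:, c] = clf.predict_proba(readings[:, cols])[:, 1]

# Stage 2: a global classifier fuses the local scores into the final alarm
fuser = LogisticRegression().fit(local_scores[:200], labels[:200])
print("held-out alarm accuracy: %.3f" % fuser.score(local_scores[200:], labels[200:]))
```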

  17. Comparison of pH Data Measured with a pH Sensor Array Using Different Data Fusion Methods

    PubMed Central

    Liao, Yi-Hung; Chou, Jung-Chuan

    2012-01-01

    This paper introduces different data fusion methods which are used for an electrochemical measurement using a sensor array. In this study, we used ruthenium dioxide sensing membrane pH electrodes to form a sensor array. The sensor array was used for detecting the pH values of grape wine, generic cola drink and bottled base water. The measured pH data were used for data fusion methods to increase the reliability of the measured results, and we also compared the fusion results with other different data fusion methods.

  18. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observation System Data/Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  19. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1993-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel-by-pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES (Geostationary Operational Environmental Satellite), AVHRR (Advanced Very High Resolution Radiometer), and SSM/I (Special Sensor Microwave Imager) sensor data, which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS (Earth Observation System Data/Information System) prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  20. Scalable sensor management for automated fusion and tactical reconnaissance

    NASA Astrophysics Data System (ADS)

    Walls, Thomas J.; Wilson, Michael L.; Partridge, Darin C.; Haws, Jonathan R.; Jensen, Mark D.; Johnson, Troy R.; Petersen, Brad D.; Sullivan, Stephanie W.

    2013-05-01

    The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities operate largely independent of one another, providing a selection of operating modes but not an integrated intelligence product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. Management of sensors and user agents takes place over standard network protocols such that any number and combination of sensors and user agents, either on the local network or connected via data link, can register with the SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant sensor system

  1. Sensor validation and fusion for gas turbine vibration monitoring

    NASA Astrophysics Data System (ADS)

    Yan, Weizhong; Goebel, Kai F.

    2003-08-01

    Vibration monitoring is an important practice throughout regular operation of gas turbine power systems and, even more so, during characterization tests. Vibration monitoring relies on accurate and reliable sensor readings. To obtain accurate readings, sensors are placed such that the signal is maximized. In the case of characterization tests, strain gauges are placed at the location of vibration modes on blades inside the gas turbine. Due to the prevailing harsh environment, these sensors have a limited life and decaying accuracy, both of which impair vibration assessment. At the same time, bandwidth limitations may restrict data transmission, which in turn limits the number of sensors that can be used for assessment. Knowing the sensor status (normal or faulty), and more importantly, knowing the true vibration level of the system at all times is essential for successful gas turbine vibration monitoring. This paper investigates a dynamic sensor validation and system health reasoning scheme that addresses the issues outlined above by considering only the information required to reliably assess system health status. In particular, if abnormal system health is suspected or if the primary sensor is determined to be faulted, information from available "sibling" sensors is dynamically integrated. A confidence measure expresses the complex interactions of sensor health and system health, their reliabilities, conflicting information, and what the health assessment is. The effectiveness of the scheme in achieving accurate and reliable vibration evaluation is then demonstrated using a combination of simulated data and a small sample of real-world application data, where the vibration of compressor blades during a real-time characterization test of a new gas turbine power system is monitored.

  2. Neural Network Model For Fusion Of Visible And Infrared Sensor Outputs

    NASA Astrophysics Data System (ADS)

    Ajjimarangsee, Pongsak; Huntsberger, Terrance L.

    1989-01-01

    Integration of outputs from multiple sensors has been the subject of much of the recent research in the machine vision field. This process is useful in a variety of applications, such as three-dimensional interpretation of scenes imaged by multiple cameras, integration of visible and range data, and the fusion of multiple types of sensors. The use of multiple types of sensors for machine vision poses the problem of how to integrate the information from these sensors. This paper presents a neural network model for the fusion of visible and thermal infrared sensor outputs. Since there is no human biological system that can be used as a model for integration of these sensor outputs, alternate biological systems for sensory fusion can serve as starting points. In this paper, a model is developed based upon six types of bimodal neurons found in the optic tectum of the rattlesnake. These neurons integrate visible and thermal infrared sensory inputs. The neural network model has a series of layers which include a layer for unsupervised clustering in the form of self-organizing feature maps, followed by a layer which has multiple filters that are generated by training a neural net with experimental rattlesnake response data. The final layer performs another unsupervised clustering for integration of the output from the filter layer. The results of a number of experiments are also presented.

  3. Computer vision and sensor fusion for detecting buried objects

    SciTech Connect

    Clark, G.A.; Hernandez, J.E.; Sengupta, S.K.; Sherwood, R.J.; Schaich, P.C.; Buhl, M.R.; Kane, R.J.; DelGrande, N.K.

    1992-10-01

    Given multiple images of the surface of the earth from dual-band infrared sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. Supervised learning pattern classifiers (including neural networks) are used. We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing information from multiple sensor types. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved problem of detecting buried land mines from an airborne standoff platform.

  4. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device

    PubMed Central

    He, Xiang; Aloi, Daniel N.; Li, Jia

    2015-01-01

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387
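
    A minimal version of the online tracking phase is a particle filter that alternates a motion update (step detection) with a measurement update (a range-like radio observation). The 1-D sketch below uses multinomial resampling and illustrative noise levels; it is far simpler than the paper's multimodal filter over a graph model, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles = 500
particles = rng.uniform(0, 20, size=n_particles)     # unknown 1-D position (m)
weights = np.full(n_particles, 1.0 / n_particles)

true_pos, beacon = 5.0, 0.0
for step in range(15):
    # Motion update: pedestrian advances ~0.7 m per detected step
    true_pos += 0.7
    particles += 0.7 + rng.normal(scale=0.2, size=n_particles)

    # Measurement update: noisy range to a radio beacon at the origin
    z = abs(true_pos - beacon) + rng.normal(scale=0.5)
    likelihood = np.exp(-0.5 * ((abs(particles - beacon) - z) / 0.5) ** 2)
    weights *= likelihood + 1e-300
    weights /= weights.sum()

    # Multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

    estimate = np.sum(weights * particles)
    print(f"step {step:2d}: estimate {estimate:6.2f} m, truth {true_pos:6.2f} m")
```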

  5. Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device.

    PubMed

    He, Xiang; Aloi, Daniel N; Li, Jia

    2015-12-14

    Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design.

  6. Neural network implementations of data association algorithms for sensor fusion

    NASA Technical Reports Server (NTRS)

    Brown, Donald E.; Pittard, Clarence L.; Martin, Worthy N.

    1989-01-01

    The paper is concerned with locating a time varying set of entities in a fixed field when the entities are sensed at discrete time instances. At a given time instant a collection of bivariate Gaussian sensor reports is produced, and these reports estimate the location of a subset of the entities present in the field. A database of reports is maintained, which ideally should contain one report for each entity sensed. Whenever a collection of sensor reports is received, the database must be updated to reflect the new information. This updating requires association processing between the database reports and the new sensor reports to determine which pairs of sensor and database reports correspond to the same entity. Algorithms for performing this association processing are presented. Neural network implementation of the algorithms, along with simulation results comparing the approaches are provided.
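
    As a non-neural baseline for the same report-to-database association task, the cost of pairing each database report with each new bivariate Gaussian report can be taken as the squared Mahalanobis distance and the assignment solved globally with the Hungarian algorithm. The sketch below is such a baseline, not the paper's neural implementation; the gate value is the 99% chi-square threshold for 2 degrees of freedom and the reports are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(db_reports, db_covs, new_reports, gate=9.21):
    """Assign new bivariate Gaussian reports to database reports.

    Cost is squared Mahalanobis distance; assigned pairs above the
    chi-square gate (99% for 2 dof) are discarded as unassociated.
    """
    cost = np.zeros((len(db_reports), len(new_reports)))
    for i, (mu, P) in enumerate(zip(db_reports, db_covs)):
        diff = new_reports - mu
        cost[i] = np.einsum("nj,jk,nk->n", diff, np.linalg.inv(P), diff)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]

db_reports = np.array([[0.0, 0.0], [10.0, 10.0]])
db_covs = [np.eye(2), np.eye(2)]
new_reports = np.array([[9.6, 10.4], [0.3, -0.2], [50.0, 50.0]])
print(associate(db_reports, db_covs, new_reports))   # [(0, 1), (1, 0)]
```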

  7. Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks

    SciTech Connect

    Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; Brigham, Katharine; Vijaya Kumar, B. V. K.

    2016-01-01

    In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent over the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy performance of the fused estimates. In addition, the errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. Besides, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Lastly, simulation results of a ballistic target tracking application are shown in the end to demonstrate the validity of our analysis.

  8. Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks

    DOE PAGES

    Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; ...

    2016-01-01

    In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent over the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy performance of the fused estimates. In addition, the errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. Besides, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Lastly, simulation results of a ballistic target tracking application are shown in the end to demonstrate the validity of our analysis.

  9. Sensor fusion-based security concept on airports with a rotating millimetre wave person scanner

    NASA Astrophysics Data System (ADS)

    Hantscher, Sebastian; Lang, Stefan; Hägelen, Manfred; Essen, Helmut; Tessmann, Axel

    2010-10-01

    This paper gives an overview of a new security concept for airports. Because single systems often do not have the desired reliability, the concept is based on the fusion of different sensors. Moreover, first measurements of a 94 GHz person scanner with a circular synthetic aperture are presented, showing the capability to detect metallic as well as nonmetallic objects without violating personal privacy.

  10. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag at a car accident and a tire under a sudden brake, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside illuminated (BSI) in-situ storage image sensor to increase the sensitivity with 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the wiring on the front side with more freedom 3). The BSI structure has another advantage in that it has fewer difficulties in attaching an additional layer on the backside, such as scintillators. This paper proposes the development of an ultra-high-speed IR image sensor combining advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with a discussion of issues in the integration.

  11. Embedded Relative Navigation Sensor Fusion Algorithms for Autonomous Rendezvous and Docking Missions

    NASA Technical Reports Server (NTRS)

    DeKock, Brandon K.; Betts, Kevin M.; McDuffie, James H.; Dreas, Christine B.

    2008-01-01

    bd Systems (a subsidiary of SAIC) has developed a suite of embedded relative navigation sensor fusion algorithms to enable NASA autonomous rendezvous and docking (AR&D) missions. Translational and rotational Extended Kalman Filters (EKFs) were developed for integrating measurements based on the vehicles' orbital mechanics and high-fidelity sensor error models, and provide a solution with increased accuracy and robustness relative to any single relative navigation sensor. The filters were tested through stand-alone covariance analysis, closed-loop testing with a high-fidelity multi-body orbital simulation, and hardware-in-the-loop (HWIL) testing in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL).

  12. Design and Implementation of a Robust Sensor Data Fusion System for Unknown Signals

    NASA Astrophysics Data System (ADS)

    Kim, Younghun; Schmid, Thomas; Srivastava, Mani B.

    In this work, we present a robust sensor fusion system for exploratory data collection, exploiting the spatial redundancy in sensor networks. Unlike prior work, our system design criteria consider a heterogeneous correlated noise model and packet loss, but no prior knowledge of signal characteristics. The former two assumptions are both common signal degradation sources in sensor networks, while the latter allows exploratory data collection of unknown signals. Through both a numerical example and an experimental study on a large military site, we show that our proposed system reduces the noise in an unknown signal 58.2% better than a comparable algorithm.

  13. Visual sensor fusion for active security in robotic industrial environments

    NASA Astrophysics Data System (ADS)

    Robla, Sandra; Llata, Jose R.; Torre-Ferrero, Carlos; Sarabia, Esther G.; Becerra, Victor; Perez-Oria, Juan

    2014-12-01

    This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera, to be used in the detection of proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localization of objects with respect to a world coordinate system. At the same time, this allows their colour information to be known. Considering that ToF information given by the range camera contains inaccuracies including distance error, border error, and pixel saturation, some corrections over the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, in 2D colour images. In addition to this, using the 3D information, the motion detection in a robot industrial environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising the object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
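
    The reprojection step described above is, in essence, the standard pinhole projection: transform each ToF point into the colour-camera frame with the calibrated extrinsics, then apply the intrinsics. The sketch below shows that step with purely illustrative calibration values, not the cameras used in the work.

```python
import numpy as np

def project_points(points_cam, K):
    """Project 3-D points (N x 3, camera frame, metres) to pixel coordinates."""
    z = points_cam[:, 2:3]
    uvw = (K @ (points_cam / z).T).T          # normalise by depth, apply intrinsics
    return uvw[:, :2]

# Illustrative colour-camera intrinsics (fx, fy, cx, cy in pixels)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# ToF points transformed into the colour camera frame via assumed extrinsics R, t
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])            # 5 cm baseline, assumed
points_tof = np.array([[0.2, 0.1, 1.5], [-0.3, 0.0, 2.0]])
points_cam = (R @ points_tof.T).T + t

pixels = project_points(points_cam, K)
print(np.round(pixels, 1))    # each row: (u, v) where the colour can be looked up
```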

  14. Automatic noncontact 3-dimensional gauging via sensor fusion

    NASA Astrophysics Data System (ADS)

    Buckley, Shawn; Tavormina, Joseph J.

    1993-09-01

    Manufacturers are now driving toward the increased use of automation and the goal of zero defects. As quality is improved and defect rates approach the popularized "Six-Sigma" level (customarily 3.4 defects per million), manual or sampled measurement techniques limit the achievement of product quality and manufacturing cost objectives. New automated inspection and gaging technology is required for process verification and control. To be competitive in the current manufacturing environment, new gaging technology must be integrated into the manufacturing process to provide on-line feedback. The co-authors are founders of CogniSense, a technology company dedicated to industrial inspection and gaging applications which use non-contact sensing techniques. CogniSense is currently applying its technology in the precision metalforming and other manufacturing industries to perform automatic dimensional measurement and provide real-time information used to control and fine-tune the manufacturing process. A variety of sensors are used to detect the characteristics of parts on-line as they are produced. Data from multiple sensors is "fused" and analyzed by a dedicated microcomputer which evaluates the sensory signature and calculates critical dimensions from the sensor input to determine whether parts are within the acceptable tolerance range. Pattern recognition algorithms are used to automatically select the sensors which provide the most important information about critical part characteristics and dimensions. These algorithms operate by observing the changes in sensor output as critical features of the part are varied. The decision-making algorithms

  15. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Keeping in view the multi-faceted coverage of this article relating to multi-objective optimization, this will open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks. PMID:26205271

  16. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different kinds of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Given the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.

  17. LinkMind: Link Optimization in Swarming Mobile Sensor Networks

    PubMed Central

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization for swarming mobile sensor networks. The new method is based on a combination of the artificial potential force, which guarantees connectivity of the mobile sensor nodes, and the max-flow min-cut theorem of graph theory, which ensures optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070
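
    The max-flow min-cut element of the method can be illustrated with a standard Edmonds-Karp computation on a small link-capacity graph; the topology and capacities below are made up and are not taken from the paper.

    from collections import deque

    def max_flow(capacity, s, t):
        """Edmonds-Karp max-flow on an adjacency-dict capacity graph (equals the min cut)."""
        flow = 0
        while True:
            # BFS for an augmenting path from s to t in the residual graph
            parent = {s: None}
            q = deque([s])
            while q and t not in parent:
                u = q.popleft()
                for v, cap in capacity[u].items():
                    if cap > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow                          # no augmenting path left
            # find the bottleneck along the path and push flow, adding residual edges
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(capacity[u][v] for u, v in path)
            for u, v in path:
                capacity[u][v] -= bottleneck
                capacity[v].setdefault(u, 0)
                capacity[v][u] += bottleneck
            flow += bottleneck

    # Hypothetical link capacities between a source node 's', relay robots 'a'/'b', and target 't'
    caps = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
    print(max_flow(caps, 's', 't'))   # -> 5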

  18. LinkMind: link optimization in swarming mobile sensor networks.

    PubMed

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization for swarming mobile sensor networks. The new method is based on a combination of the artificial potential force, which guarantees connectivity of the mobile sensor nodes, and the max-flow min-cut theorem of graph theory, which ensures optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation.

  19. Sensor data fusion for accurate cloud presence prediction using Dempster-Shafer evidence theory.

    PubMed

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue affecting photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent.
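
    The core combination step of Dempster-Shafer evidence theory can be sketched as follows for two sensors reporting basic probability masses over the frame {cloud, clear}; the mass values are invented, and the actual sensor-to-mass mapping used in the paper is not reproduced here.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions over frozenset focal elements with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                      # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("Total conflict: sources cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    CLOUD, CLEAR = frozenset({"cloud"}), frozenset({"clear"})
    THETA = CLOUD | CLEAR                                # frame of discernment
    m_sensor_1 = {CLOUD: 0.6, CLEAR: 0.1, THETA: 0.3}    # hypothetical radiation-sensor report
    m_sensor_2 = {CLOUD: 0.5, CLEAR: 0.2, THETA: 0.3}    # hypothetical second-sensor report
    print(dempster_combine(m_sensor_1, m_sensor_2))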

  20. Sensor Data Fusion for Accurate Cloud Presence Prediction Using Dempster-Shafer Evidence Theory

    PubMed Central

    Li, Jiaming; Luo, Suhuai; Jin, Jesse S.

    2010-01-01

    Sensor data fusion technology can be used to best extract useful information from multiple sensor observations. It has been widely applied in various applications such as target tracking, surveillance, robot navigation, and signal and image processing. This paper introduces a novel data fusion approach for a multiple radiation sensor environment using Dempster-Shafer evidence theory. The methodology is used to predict cloud presence based on the inputs of radiation sensors. Different radiation data have been used for the cloud prediction. The potential application areas of the algorithm include renewable power for virtual power stations, where the prediction of cloud presence is the most challenging issue affecting photovoltaic output. The algorithm is validated by comparing the predicted cloud presence with the corresponding sunshine occurrence data that were recorded as the benchmark. Our experiments have indicated that, compared to approaches using individual sensors, the proposed data fusion approach can increase the correct rate of cloud prediction by ten percent and decrease the unknown rate of cloud prediction by twenty-three percent. PMID:22163414

  1. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of the performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
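
    The IMM machinery referred to above follows a fixed cycle of mixing, model-conditioned filtering, model-probability update and combination. The sketch below is a generic two-model IMM on a one-dimensional constant-velocity problem, not the authors' GBAS/ADS-B/MLAT/WAM configuration; all matrices and measurements are illustrative.

    import numpy as np

    def kf_step(x, P, z, F, Q, H, R):
        """One Kalman predict/update; returns state, covariance and innovation likelihood."""
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        y = z - H @ x
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        lik = float(np.exp(-0.5 * y @ np.linalg.inv(S) @ y) /
                    np.sqrt(np.linalg.det(2 * np.pi * S)))
        return x, P, lik

    def imm_step(xs, Ps, mu, z, models, Pi, H, R):
        """One IMM cycle: mix, filter per model, update model probabilities, combine."""
        M = len(models)
        c = Pi.T @ mu                                    # predicted model probabilities
        w = (Pi * mu[:, None]) / c[None, :]              # mixing weights w[i, j]
        x0 = [sum(w[i, j] * xs[i] for i in range(M)) for j in range(M)]
        P0 = []
        for j in range(M):
            P = np.zeros_like(Ps[0])
            for i in range(M):
                d = xs[i] - x0[j]
                P += w[i, j] * (Ps[i] + np.outer(d, d))
            P0.append(P)
        xs_n, Ps_n, lik = [], [], np.zeros(M)
        for j, (F, Q) in enumerate(models):
            x, P, lik[j] = kf_step(x0[j], P0[j], z, F, Q, H, R)
            xs_n.append(x); Ps_n.append(P)
        mu_n = lik * c
        mu_n /= mu_n.sum()
        x_comb = sum(mu_n[j] * xs_n[j] for j in range(M))
        return xs_n, Ps_n, mu_n, x_comb

    # Two hypothetical motion models: nearly constant velocity with low / high process noise
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])
    models = [(F, 0.01 * np.eye(2)), (F, 1.0 * np.eye(2))]
    Pi = np.array([[0.95, 0.05], [0.05, 0.95]])          # model transition probabilities
    H, R = np.array([[1.0, 0.0]]), np.array([[4.0]])     # position-only measurements
    xs, Ps, mu = [np.zeros(2), np.zeros(2)], [10 * np.eye(2), 10 * np.eye(2)], np.array([0.5, 0.5])
    for z in [1.0, 2.1, 3.0, 8.0, 13.5]:                 # made-up measurements with a manoeuvre
        xs, Ps, mu, x_hat = imm_step(xs, Ps, mu, np.array([z]), models, Pi, H, R)
        print(mu, x_hat)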

  2. Summary of sensor evaluation for the Fusion ELectromagnetic Induction eXperiment (FELIX)

    SciTech Connect

    Knott, M.J.

    1982-08-01

    As part of the First Wall/Blanket/Shield Engineering Test Program, a test bed called FELIX (Fusion ELectromagnetic Induction eXperiment) is now under construction at ANL. Its purpose will be to test, evaluate, and develop computer codes for the prediction of electromagnetically induced phenomena in a magnetic environment modeling that of a fusion reactor. Crucial to this process is the sensing and recording of the various induced effects. Sensor evaluation for FELIX has reached the point where most sensor types have been evaluated and preliminary decisions are being made as to the type and quantity for the initial FELIX experiments. These early experiments, the first flat-plate experiment in particular, will be aimed at testing the sensors as well as the pertinent theories involved. The reason for these evaluations, decisions, and proof tests is the harsh electrical and magnetic environment that FELIX presents.

  3. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1992-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel by pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES, AVHRR, and SSM/I sensor data which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  4. Advanced data visualization and sensor fusion: Conversion of techniques from medical imaging to Earth science

    NASA Technical Reports Server (NTRS)

    Savage, Richard C.; Chen, Chin-Tu; Pelizzari, Charles; Ramanathan, Veerabhadran

    1992-01-01

    Hughes Aircraft Company and the University of Chicago propose to transfer existing medical imaging registration algorithms to the area of multi-sensor data fusion. The University of Chicago's algorithms have been successfully demonstrated to provide pixel by pixel comparison capability for medical sensors with different characteristics. The research will attempt to fuse GOES, AVHRR, and SSM/I sensor data which will benefit a wide range of researchers. The algorithms will utilize data visualization and algorithm development tools created by Hughes in its EOSDIS prototyping. This will maximize the work on the fusion algorithms since support software (e.g. input/output routines) will already exist. The research will produce a portable software library with documentation for use by other researchers.

  5. All-IP-Ethernet architecture for real-time sensor-fusion processing

    NASA Astrophysics Data System (ADS)

    Hiraki, Kei; Inaba, Mary; Tezuka, Hiroshi; Tomari, Hisanobu; Koizumi, Kenichi; Kondo, Shuya

    2016-03-01

    Serendipter is a device that distinguishes and selects very rare particles and cells from a huge population. We are currently designing and constructing an information processing system for a Serendipter. The information processing system for Serendipter is a kind of sensor-fusion system, but with much greater difficulties. To fulfill its requirements, we adopt an All-IP based architecture: the All-IP-Ethernet based data processing system consists of (1) sensors/detectors that directly output data as IP-Ethernet packet streams, (2) aggregation into single Ethernet/TCP/IP streams by an L2 100 Gbps Ethernet switch, and (3) an FPGA board with a 100 Gbps Ethernet interface connected to the switch and a Xeon-based server. Circuits in the FPGA include a 100 Gbps Ethernet MAC, buffers and preprocessing, and real-time deep learning circuits using multi-layer neural networks. The proposed All-IP architecture solves the existing problems of constructing large-scale sensor-fusion systems.

  6. Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy

    PubMed Central

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of the performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715

  7. Sensor fusion for assured vision in space applications

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Krishen, Kumar

    1993-01-01

    By using emittance and reflectance radiation models, the effects of angle of observation, polarization, and spectral content are analyzed to characterize the geometrical and physical properties--reflectivity, emissivity, orientation, dielectric properties, and roughness--of a sensed surface. Based on this analysis, the use of microwave, infrared, and optical sensing is investigated to assure the perception of surfaces on a typical lunar outpost. Also, the concept of employing several sensors on a lunar outpost is explored. An approach for efficient hardware implementation of the fused sensor systems is discussed.

  8. Statistical Sensor Fusion of a 9-DOF Mems Imu for Indoor Navigation

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.

    2017-09-01

    Sensor fusion of a MEMS IMU with a magnetometer is a popular system design, because such 9-DoF (degrees of freedom) systems are capable of achieving drift-free 3D orientation tracking. However, these systems are often vulnerable to ambient magnetic distortions and lack useful position information; in the absence of external position aiding (e.g. satellite/ultra-wideband positioning systems) the dead-reckoned position accuracy from a 9-DoF MEMS IMU deteriorates rapidly due to unmodelled errors. Positioning information is valuable in many satellite-denied geomatics applications (e.g. indoor navigation, location-based services, etc.). This paper proposes an improved 9-DoF IMU indoor pose tracking method using batch optimization. By adopting a robust in-situ user self-calibration approach to model the systematic errors of the accelerometer, gyroscope, and magnetometer simultaneously in a tightly-coupled post-processed least-squares framework, the accuracy of the estimated trajectory from a 9-DoF MEMS IMU can be improved. Through a combination of relative magnetic measurement updates and a robust weight function, the method is able to tolerate a high level of magnetic distortions. The proposed auto-calibration method was tested in-use under various heterogeneous magnetic field conditions to mimic a person walking with the sensor in their pocket, a person checking their phone, and a person walking with a smartwatch. In these experiments, the presented algorithm improved the in-situ dead-reckoning orientation accuracy by 79.8-89.5 % and the dead-reckoned positioning accuracy by 72.9-92.8 %, thus reducing the relative positioning error from metre-level to decimetre-level after ten seconds of integration, without making assumptions about the user's dynamics.
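
    The paper's batch least-squares self-calibration is not reproduced here, but the basic way a 9-DoF unit combines its three sensors can be sketched with a simple complementary filter: roll and pitch from gravity, yaw from the tilt-compensated magnetometer, blended with integrated gyroscope rates. The axis conventions, blending constant and sample readings below are assumptions for illustration only.

    import numpy as np

    def tilt_and_heading(acc, mag):
        """Roll/pitch from the gravity vector, yaw from the tilt-compensated magnetometer.

        Assumes a right-handed body frame with z roughly along gravity when level;
        sign conventions may need adjusting for a specific device.
        """
        ax, ay, az = acc / np.linalg.norm(acc)
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.sqrt(ay ** 2 + az ** 2))
        mx, my, mz = mag / np.linalg.norm(mag)
        # rotate the magnetometer reading into the horizontal plane before taking the heading
        mx_h = mx * np.cos(pitch) + mz * np.sin(pitch)
        my_h = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
                - mz * np.sin(roll) * np.cos(pitch))
        yaw = np.arctan2(-my_h, mx_h)
        return np.array([roll, pitch, yaw])

    def complementary_update(angles, gyro, acc, mag, dt, alpha=0.98):
        """Blend integrated gyro rates (short term) with acc/mag angles (long term).

        Treats gyro rates as Euler-angle rates (small-angle simplification).
        """
        return alpha * (angles + gyro * dt) + (1.0 - alpha) * tilt_and_heading(acc, mag)

    # One update with made-up, roughly static readings (rad/s, m/s^2, arbitrary magnetometer units)
    angles = np.zeros(3)
    angles = complementary_update(angles,
                                  gyro=np.array([0.01, -0.02, 0.0]),
                                  acc=np.array([0.0, 0.0, 9.81]),
                                  mag=np.array([22.0, 0.0, -40.0]),
                                  dt=0.01)
    print(np.degrees(angles))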

  9. Multi-Source Sensor Fusion for Small Unmanned Aircraft Systems Using Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Cook, Brandon; Cohen, Kelly

    2017-01-01

    As the applications for using small Unmanned Aircraft Systems (sUAS) beyond visual line of sight (BVLOS) continue to grow in the coming years, it is imperative that intelligent sensor fusion techniques be explored. In BVLOS scenarios the vehicle position must be accurately tracked over time to ensure that no two vehicles collide with one another, that no vehicle crashes into surrounding structures, and to identify off-nominal scenarios. Therefore, in this study an intelligent systems approach is used to estimate the position of sUAS given a variety of sensor platforms, including GPS, radar, and on-board detection hardware. Common research challenges include asynchronous sensor rates and sensor reliability. To address these challenges, techniques such as Maximum a Posteriori estimation and a Fuzzy Logic based sensor confidence determination are used.

  10. Optimal geometry for a quartz multipurpose SPM sensor.

    PubMed

    Stirling, Julian

    2013-01-01

    We propose a geometry for a piezoelectric SPM sensor that can be used for combined AFM/LFM/STM. The sensor utilises symmetry to provide a lateral mode without the need to excite torsional modes. The symmetry allows normal and lateral motion to be completely isolated, even when introducing large tips to tune the dynamic properties to optimal values.

  11. Monitoring soil health with a sensor fusion approach

    USDA-ARS?s Scientific Manuscript database

    Sensor-based approaches to assessment and quantification of soil health are important to facilitate cost-effective, site-specific soil management. While traditional laboratory analysis is effective for assessing soil health (or soil quality) at a few sites, such an approach quickly becomes infeasibl...

  12. Fusion of Smartphone Motion Sensors for Physical Activity Recognition

    PubMed Central

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J. M.

    2014-01-01

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible. PMID:24919015
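
    A minimal version of the kind of experiment described above, feature-level fusion of accelerometer and gyroscope windows fed to a classifier, can be sketched as follows; the data are synthetic stand-ins for the published data set, and the features and classifier are deliberately simple baselines.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    def window_features(signal):
        """Per-axis mean and standard deviation over one window (a common baseline)."""
        return np.concatenate([signal.mean(axis=0), signal.std(axis=0)])

    # Synthetic stand-in for labelled sensor windows: 300 windows x 128 samples x 3 axes,
    # with class-dependent offsets so the example actually learns something.
    n_windows, n_samples = 300, 128
    labels = rng.integers(0, 3, n_windows)                    # e.g. walking / sitting / jogging
    acc = rng.normal(0, 1, (n_windows, n_samples, 3)) + labels[:, None, None]
    gyro = rng.normal(0, 1, (n_windows, n_samples, 3)) + 0.5 * labels[:, None, None]

    X_acc = np.array([window_features(w) for w in acc])
    X_gyro = np.array([window_features(w) for w in gyro])
    X_fused = np.hstack([X_acc, X_gyro])                      # feature-level fusion of both sensors

    for name, X in [("accelerometer only", X_acc), ("gyroscope only", X_gyro), ("fused", X_fused)]:
        score = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
        print(name, round(score, 3))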

  13. Fusion of smartphone motion sensors for physical activity recognition.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2014-06-10

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.

  14. Statistical methods of combining information: Applications to sensor data fusion

    SciTech Connect

    Burr, T.

    1996-12-31

    This paper reviews some statistical approaches to combining information from multiple sources. Promising new approaches will be described, and potential applications to combining not-so-different data sources such as sensor data will be discussed. Experiences with one real data set are described.

  15. Optimal Magnetic Sensor Vests for Cardiac Source Imaging

    PubMed Central

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-01-01

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910
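
    The robustness criterion used above, the condition of the lead field matrix, can be evaluated for any candidate sensor arrangement once a forward model is available. The sketch below uses a toy dipole-like forward model and random search purely to illustrate the criterion; the geometry, source model and optimizer are not those of the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def lead_field(sensor_pos, sensor_ori, source_pos):
        """Toy forward model: point-source fields projected onto sensor orientations.

        A stand-in for a realistic torso/dipole model; only the structure
        (sensors x sources lead field matrix) matters for the criterion.
        """
        L = np.zeros((len(sensor_pos), len(source_pos)))
        for i, (r_s, n_s) in enumerate(zip(sensor_pos, sensor_ori)):
            for j, r_q in enumerate(source_pos):
                d = r_s - r_q
                L[i, j] = n_s @ d / np.linalg.norm(d) ** 3   # dipole-like falloff with distance
        return L

    sources = rng.normal(0, 0.05, size=(13, 3))              # 13-source heart model (positions made up)
    best_cond = np.inf
    for trial in range(200):                                  # random search stands in for the optimizer
        pos = rng.normal(0, 0.25, size=(32, 3))               # 32 candidate sensor positions
        ori = rng.normal(size=(32, 3))
        ori /= np.linalg.norm(ori, axis=1, keepdims=True)
        cond = np.linalg.cond(lead_field(pos, ori, sources))
        best_cond = min(best_cond, cond)
    print("best condition number found:", best_cond)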

  16. Optimal Magnetic Sensor Vests for Cardiac Source Imaging.

    PubMed

    Lau, Stephan; Petković, Bojana; Haueisen, Jens

    2016-05-24

    Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics.

  17. Wireless sensor fusion approach for monitoring chemical mechanical planarization (CMP) process

    NASA Astrophysics Data System (ADS)

    Ohri, Amit

    Chemical mechanical planarization (CMP) is used in the microelectronics and optical industries for local as well as global planarity and for producing mirror-finished surfaces. Roughness (Ra), within-wafer non-uniformity (WIWNU), and material removal rate (MRR) are the major performance variables in polishing. CMP is a complex process involving some 36 input variables. A review of the literature showed that static models that use process parameters are inadequate for estimating and monitoring the performance variables in the CMP process. Pad-level interactions play a major role in polishing. Sensor-based monitoring techniques enable monitoring of the CMP process. Additionally, sensor fusion techniques may improve robustness and allow the process to be monitored beyond what a single sensor can provide. In this work, wireless vibration (Z-axis) and temperature sensors mounted on a bench-top polisher (ECOMET polisher from Buehler) are used to monitor the material removal rate (MRR) and surface finish (Ra). The wireless sensor platform has a sampling rate of 500 Hz for the vibration sensor and 4 Hz for the temperature sensor. Alumina-based alkaline slurry is used in the polishing process. The process conditions include two loading conditions (10 lb and 5 lb) and two rotational speeds (500 rpm and 300 rpm). The polishing studies were conducted on 1.6" copper samples and a Microcloth pad (from Buehler). The overall approach involves relating the various sensor signal features to MRR and Ra from the CMP process. The vibration features were extracted using statistical, frequency, and RQA (non-linear) analysis techniques. The vibration features were combined with temperature features to build multiple linear regression models. The regression fitting accuracy for the roughness model is ~93% using statistical features such as maximum and kurtosis, time-frequency features such as energy, nonlinear features such as LAM and Lmax, and thermal features such as net

  18. Report of the Fusion Energy Sciences Advisory Committee. Panel on Integrated Simulation and Optimization of Magnetic Fusion Systems

    SciTech Connect

    Dahlburg, Jill; Corones, James; Batchelor, Donald; Bramley, Randall; Greenwald, Martin; Jardin, Stephen; Krasheninnikov, Sergei; Laub, Alan; Leboeuf, Jean-Noel; Lindl, John; Lokke, William; Rosenbluth, Marshall; Ross, David; Schnack, Dalton

    2002-11-01

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world’s energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC

  19. Otoferlin is a calcium sensor that directly regulates SNARE-mediated membrane fusion

    PubMed Central

    Johnson, Colin P.

    2010-01-01

    Otoferlin is a large multi–C2 domain protein proposed to act as a calcium sensor that regulates synaptic vesicle exocytosis in cochlear hair cells. Although mutations in otoferlin have been associated with deafness, its contribution to neurotransmitter release is unresolved. Using recombinant proteins, we demonstrate that five of the six C2 domains of otoferlin sense calcium with apparent dissociation constants that ranged from 13–25 µM; in the presence of membranes, these apparent affinities increase by up to sevenfold. Using a reconstituted membrane fusion assay, we found that five of the six C2 domains of otoferlin stimulate membrane fusion in a calcium-dependent manner. We also demonstrate that a calcium binding–deficient form of the C2C domain is incapable of stimulating membrane fusion, further underscoring the importance of calcium for the protein’s function. These results demonstrate for the first time that otoferlin is a calcium sensor that can directly regulate soluble N-ethyl-maleimide sensitive fusion protein attachment protein receptor–mediated membrane fusion reactions. PMID:20921140

  20. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators

    PubMed Central

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-01-01

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for an axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented. PMID:26703603

  1. Neuromechanical sensor fusion yields highest accuracies in predicting ambulation mode transitions for trans-tibial amputees.

    PubMed

    Tkach, D C; Hargrove, L J

    2013-01-01

    Advances in battery and actuator technology have enabled clinical use of powered lower limb prostheses such as the BiOM Powered Ankle. To allow ambulation over various types of terrain, such devices rely on built-in mechanical sensors or manual actuation by the amputee to transition into an operational mode that is suitable for a given terrain. It is unclear if mechanical sensors alone can accurately modulate operational modes, while voluntary actuation prevents seamless, naturalistic gait. Ensuring that the prosthesis is ready to accommodate new terrain types at the first step is critical for user safety. EMG signals from the patient's residual leg muscles may provide additional information to accurately choose the proper mode of prosthesis operation. Using a pattern recognition classifier, we compared the accuracy of predicting 8 different mode transitions based on (1) prosthesis mechanical sensor output, (2) EMG recorded from the residual limb, and (3) fusion of EMG and mechanical sensor data. Our findings indicate that the neuromechanical sensor fusion significantly decreases errors in predicting 10 mode transitions as compared to using either mechanical sensors or EMG alone (2.3±0.7% vs. 7.8±0.9% and 20.2±2.0% respectively).

  2. Robust site security using smart seismic array technology and multi-sensor data fusion

    NASA Astrophysics Data System (ADS)

    Hellickson, Dean; Richards, Paul; Reynolds, Zane; Keener, Joshua

    2010-04-01

    Traditional site security systems are susceptible to high individual sensor nuisance alarm rates that reduce the overall system effectiveness. Visual assessment of intrusions can be intensive and manually difficult as cameras are slewed by the system to non-intrusion areas or as operators respond to nuisance alarms. Very little system intrusion performance data are available other than discrete sensor alarm indications that provide no real value. This paper discusses the system architecture, integration and display of a multi-sensor data-fused system for wide-area surveillance, local site intrusion detection and intrusion classification. The incorporation of a novel seismic array of smart sensors using FK beamforming processing, which greatly enhances the overall detection and classification performance of the system, is discussed. Recent test data demonstrate the performance of the seismic array within several different installations and its ability to classify and track moving targets at significant standoff distances with exceptional immunity to background clutter and noise. Multi-sensor data fusion is applied across a suite of complementary sensors, eliminating almost all nuisance alarms, while integration within a geographical information system feeds a visual-fusion display of the area being secured. Real-time sensor detection and intrusion classification data are presented within a visual-fusion display, providing greatly enhanced situational awareness, system performance information and real-time assessment of intrusions and situations of interest with limited security operator involvement. This approach scales from a small local perimeter to a very large geographical area and can be used across multiple sites controlled from a single command and control station.

  3. Geometrical optimization of a local ballistic magnetic sensor

    SciTech Connect

    Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya; Kimura, Takashi

    2014-04-07

    We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.

  4. MTS in false positive reduction for multi-sensor fusion

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Gosnell, Michael; Cudney, Elizabeth

    2014-05-01

    The Mahalanobis Taguchi System (MTS) is a relatively new tool in the vehicle health maintenance domain, but has some distinct advantages in current multi-sensor implementations. The use of Mahalanobis Spaces (MS) allows the algorithm to identify characteristics of sensor signals that reveal behaviors in machines. MTS is extremely powerful, with the caveat that the correct variables must be selected to form the MS. In this research work, 56 sensors monitor various aspects of the vehicles. Typically, using the MTS process, identification of useful variables is preceded by validation of the measurement scale. However, the MTS approach does not directly include any mitigating steps should the measurement scale not be validated. Existing work has performed outlier removal in the construction of the MS, which can lead to better validation. In our approach, we modify the outlier removal process with more liberal definitions of outliers to better identify variables' impact prior to the identification of useful variables. This subtle change substantially lowered the false positive rate because additional variables were retained. Traditional MTS approaches identify useful variables only to the extent that they help identify the positive (abnormal) condition. The impact of removing false negatives is not included. Initial results show our approach can reduce false positive values while still maintaining complete fault identification for this vehicle data set.

  5. A Weighted Belief Entropy-Based Uncertainty Measure for Multi-Sensor Data Fusion.

    PubMed

    Tang, Yongchuan; Zhou, Deyun; Xu, Shuai; He, Zichang

    2017-04-22

    In real applications, how to measure the degree of uncertainty of sensor reports before applying sensor data fusion is a big challenge. In this paper, in the frame of Dempster-Shafer evidence theory, a weighted belief entropy based on Deng entropy is proposed to quantify the uncertainty of uncertain information. The weight of the proposed belief entropy is based on the relative scale of a proposition with regard to the frame of discernment (FOD). Compared with some other uncertainty measures in the Dempster-Shafer framework, the new measure focuses on the uncertain information represented by not only the mass function, but also the scale of the FOD, which means less information loss in information processing. After that, a new multi-sensor data fusion approach based on the weighted belief entropy is proposed. The rationality and superiority of the new multi-sensor data fusion method are verified by an experiment on artificial data and an application to fault diagnosis of a motor rotor.
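
    A sketch of the entropy calculation is given below. The Deng entropy term is standard; the weighting by the relative scale of each proposition follows one plausible reading of the abstract (|A| divided by the size of the frame of discernment) and may differ from the exact form in the paper. The mass assignments are invented.

    import numpy as np

    def deng_entropy(m):
        """Deng entropy: -sum over focal elements A of m(A) * log2( m(A) / (2^|A| - 1) )."""
        return -sum(v * np.log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

    def weighted_belief_entropy(m, frame):
        """Deng entropy with each term weighted by |A| / |FOD| (assumed weighting)."""
        n = len(frame)
        return -sum(v * (len(A) / n) * np.log2(v / (2 ** len(A) - 1))
                    for A, v in m.items() if v > 0)

    FOD = frozenset({"F1", "F2", "F3"})                       # hypothetical fault hypotheses
    m = {frozenset({"F1"}): 0.55,
         frozenset({"F1", "F2"}): 0.25,
         FOD: 0.20}                                           # one sensor's BPA (made up)
    print(deng_entropy(m), weighted_belief_entropy(m, FOD))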

  6. Inertial and optical sensor fusion to compensate for partial occlusions in surgical tracking systems

    NASA Astrophysics Data System (ADS)

    He, Changyu; Liu, Yue

    2015-08-01

    To solve the occlusion problem in optical tracking systems (OTS) for surgical navigation, this paper proposes a sensor fusion approach and an adaptive display method to handle cases where partial or total occlusion occurs. In the sensor fusion approach, the full 6D pose information provided by the optical tracker is used to estimate the bias of the inertial sensors when all of the markers are visible. When partial occlusion occurs, the optical system can track the position of at least one marker, which can be combined with the orientation estimated from the inertial measurements to recover the full 6D pose information. When all the markers are invisible, position tracking is performed based on outputs of the Inertial Measurement Unit (IMU), which may accumulate increasing drift error. To alert the user when the drift error is large enough to influence the navigation, images adapted to the drift error are displayed in the user's field of view. The experiments are performed with an augmented reality HMD, which displays the AR images, and the hybrid tracking system (HTS), which consists of an OTS and an IMU. Experimental results show that, with the proposed sensor fusion approach, the 6D pose of the head with respect to the reference frame can be estimated even under partial occlusion conditions. With the help of the proposed adaptive display method, users can recover the view of the markers when the error is considered to be relatively high.

  7. Sensor data monitoring and decision level fusion scheme for early fire detection

    NASA Astrophysics Data System (ADS)

    Rizogiannis, Constantinos; Thanos, Konstantinos Georgios; Astyakopoulos, Alkiviadis; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.

    2017-05-01

    The aim of this paper is to present the sensor monitoring and decision-level fusion scheme for early fire detection which has been developed in the context of the AF3 (Advanced Forest Fire Fighting) European FP7 research project, adopted specifically in the OCULUS-Fire control and command system and tested during a firefighting field test in Greece with a prescribed real fire, generating early-warning detection alerts and notifications. For this purpose, and in order to improve the reliability of the fire detection system, a two-level fusion scheme is developed exploiting a variety of observation solutions from the air (e.g. UAV infrared cameras), the ground (e.g. meteorological and atmospheric sensors) and ancillary sources (e.g. public information channels, citizens' smartphone applications and social media). In the first level, a change-point detection technique is applied to detect changes in the mean value of each parameter measured by the ground sensors, such as temperature, humidity and CO2, and then the rate of rise of each changed parameter is calculated. In the second level, the fire-event Basic Probability Assignment (BPA) function is determined for each ground sensor using fuzzy-logic theory, and the corresponding mass values are combined in a decision-level fusion process using evidential reasoning theory to estimate the final fire event probability.
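
    The first-level step, detecting a change in the mean of a ground-sensor parameter and then computing its rate of rise, can be sketched with a simple one-sided CUSUM detector; the thresholds, baseline window and synthetic temperature trace below are illustrative, and the project's actual change-point technique may differ.

    import numpy as np

    def cusum_change_point(x, drift=0.5, threshold=5.0):
        """One-sided CUSUM on standardised data; returns the index of the first alarm or None."""
        x = (x - np.mean(x[:20])) / (np.std(x[:20]) + 1e-9)   # baseline from an initial window
        s = 0.0
        for i, xi in enumerate(x):
            s = max(0.0, s + xi - drift)
            if s > threshold:
                return i
        return None

    def rate_of_rise(x, i, window=10, dt=1.0):
        """Mean slope of the signal over the samples following the detected change."""
        seg = x[i:i + window]
        return (seg[-1] - seg[0]) / ((len(seg) - 1) * dt)

    # Synthetic temperature trace: stable, then rising as if a fire approaches (made up)
    rng = np.random.default_rng(3)
    temp = np.concatenate([25 + rng.normal(0, 0.2, 60),
                           25 + np.linspace(0, 15, 60) + rng.normal(0, 0.2, 60)])
    i = cusum_change_point(temp)
    if i is not None:
        print("change detected at sample", i, "rate of rise:", rate_of_rise(temp, i))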

  8. Discrete-event requirements model for sensor fusion to provide real-time diagnostic feedback

    NASA Astrophysics Data System (ADS)

    Rokonuzzaman, Mohd; Gosine, Raymond G.

    1998-06-01

    Minimally-invasive surgical techniques reduce the size of the access corridor and affected zones resulting in limited real-time perceptual information available to the practitioners. A real-time feedback system is required to offset deficiencies in perceptual information. This feedback system acquires data from multiple sensors and fuses these data to extract pertinent information within defined time windows. To perform this task, a set of computing components interact with each other resulting in a discrete event dynamic system. In this work, a new discrete event requirements model for sensor fusion has been proposed to ensure logical and temporal correctness of the operation of the real-time diagnostic feedback system. This proposed scheme models system requirements as a Petri net based discrete event dynamic machine. The graphical representation and quantitative analysis of this model has been developed. Having a natural graphical property, this Petri net based model enables the requirements engineer to communicate intuitively with the client to avoid faults in the early phase of the development process. The quantitative analysis helps justify the logical and temporal correctness of the operation of the system. It has been shown that this model can be analyzed to check the presence of deadlock, reachability, and repetitiveness of the operation of the sensor fusion system. This proposed novel technique to model the requirements of sensor fusion as a discrete event dynamic system has the potential to realize highly reliable real-time diagnostic feedback system for many applications, such as minimally invasive instrumentation.

  9. Multi-Sensor Data Fusion Using a Relevance Vector Machine Based on an Ant Colony for Gearbox Fault Detection

    PubMed Central

    Liu, Zhiwen; Guo, Wei; Tang, Zhangchun; Chen, Yongqiang

    2015-01-01

    Sensors play an important role in the modern manufacturing and industrial processes. Their reliability is vital to ensure reliable and accurate information for condition based maintenance. For the gearbox, the critical machine component in the rotating machinery, the vibration signals collected by sensors are usually noisy. At the same time, the fault detection results based on the vibration signals from a single sensor may be unreliable and unstable. To solve this problem, this paper proposes an intelligent multi-sensor data fusion method using the relevance vector machine (RVM) based on an ant colony optimization algorithm (ACO-RVM) for gearboxes’ fault detection. RVM is a sparse probability model based on the support vector machine (SVM). RVM not only has higher detection accuracy, but also better real-time performance compared with SVM. The ACO algorithm is used to determine the kernel parameters of RVM. Moreover, ensemble empirical mode decomposition (EEMD) is applied to preprocess the raw vibration signals to eliminate the influence of noise and other unrelated signals. The distance evaluation technique (DET) is employed to select dominant features as input of the ACO-RVM, so that the redundancy and interference in a large number of features can be removed. Two gearboxes are used to demonstrate the performance of the proposed method. The experimental results show that the ACO-RVM has higher fault detection accuracy than the RVM with normal cross-validation (CV). PMID:26334280

  10. Multi-Sensor Data Fusion Using a Relevance Vector Machine Based on an Ant Colony for Gearbox Fault Detection.

    PubMed

    Liu, Zhiwen; Guo, Wei; Tang, Zhangchun; Chen, Yongqiang

    2015-08-31

    Sensors play an important role in the modern manufacturing and industrial processes. Their reliability is vital to ensure reliable and accurate information for condition based maintenance. For the gearbox, the critical machine component in the rotating machinery, the vibration signals collected by sensors are usually noisy. At the same time, the fault detection results based on the vibration signals from a single sensor may be unreliable and unstable. To solve this problem, this paper proposes an intelligent multi-sensor data fusion method using the relevance vector machine (RVM) based on an ant colony optimization algorithm (ACO-RVM) for gearboxes' fault detection. RVM is a sparse probability model based on the support vector machine (SVM). RVM not only has higher detection accuracy, but also better real-time performance compared with SVM. The ACO algorithm is used to determine the kernel parameters of RVM. Moreover, ensemble empirical mode decomposition (EEMD) is applied to preprocess the raw vibration signals to eliminate the influence of noise and other unrelated signals. The distance evaluation technique (DET) is employed to select dominant features as input of the ACO-RVM, so that the redundancy and interference in a large number of features can be removed. Two gearboxes are used to demonstrate the performance of the proposed method. The experimental results show that the ACO-RVM has higher fault detection accuracy than the RVM with normal cross-validation (CV).

  11. Unsteady flow sensing and optimal sensor placement using machine learning

    NASA Astrophysics Data System (ADS)

    Semaan, Richard

    2016-11-01

    Machine learning is used to estimate the flow state and to determine the optimal sensor placement over a two-dimensional (2D) airfoil equipped with a Coanda actuator. The analysis is based on flow field data obtained from 2D unsteady Reynolds-averaged Navier-Stokes (uRANS) simulations with different jet blowing intensities and actuation frequencies, characterizing different flow separation states. This study shows how the "random forests" algorithm can be used beyond its typical role in fluid mechanics of estimating the flow state, namely to determine the optimal sensor placement. The results are compared against the current de facto standard of the maximum modal amplitude location and against a brute-force approach that scans all possible sensor combinations. The results show that it is possible to simultaneously infer the state of the flow and determine the optimal sensor location without the need to perform proper orthogonal decomposition. Collaborative Research Center (CRC) 880, DFG.
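
    One way to read the dual use of random forests described above is to rank candidate sensor locations by their importance for predicting the flow state; the sketch below does this on synthetic surrogate data and is not the paper's exact procedure.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(7)

    # Synthetic surrogate data: 500 snapshots, 40 candidate sensor locations,
    # with only a handful of locations actually informative about the flow state.
    n_snapshots, n_locations = 500, 40
    informative = [3, 17, 28]
    state = rng.integers(0, 3, size=n_snapshots)              # e.g. attached / partially / fully separated
    X = rng.normal(0, 1, size=(n_snapshots, n_locations))
    for k in informative:
        X[:, k] += 1.5 * state                                 # these "sensors" respond to the state

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, state)
    ranking = np.argsort(rf.feature_importances_)[::-1]        # candidate locations ranked by importance
    print("top candidate sensor locations:", ranking[:5])
    print("state-estimation accuracy:", rf.score(X, state))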

  12. A sensor fusion field experiment in forest ecosystem dynamics

    NASA Technical Reports Server (NTRS)

    Smith, James A.; Ranson, K. Jon; Williams, Darrel L.; Levine, Elissa R.; Goltz, Stewart M.

    1990-01-01

    The background of the Forest Ecosystem Dynamics field campaign is presented, a progress report on the analysis of the collected data and related modeling activities is provided, and plans for future experiments at different points in the phenological cycle are outlined. The ecological overview of the study site is presented, and attention is focused on forest stands, needles, and atmospheric measurements. Sensor deployment and thermal and microwave observations are discussed, along with two examples of the optical radiation measurements obtained during the experiment in support of radiative transfer modeling. Future activities pertaining to an archival system, synthetic aperture radar, carbon acquisition modeling, and upcoming field experiments are considered.

  13. A sensor fusion field experiment in forest ecosystem dynamics

    NASA Technical Reports Server (NTRS)

    Smith, James A.; Ranson, K. Jon; Williams, Darrel L.; Levine, Elissa R.; Goltz, Stewart M.

    1990-01-01

    The background of the Forest Ecosystem Dynamics field campaign is presented, a progress report on the analysis of the collected data and related modeling activities is provided, and plans for future experiments at different points in the phenological cycle are outlined. The ecological overview of the study site is presented, and attention is focused on forest stands, needles, and atmospheric measurements. Sensor deployment and thermal and microwave observations are discussed, along with two examples of the optical radiation measurements obtained during the experiment in support of radiative transfer modeling. Future activities pertaining to an archival system, synthetic aperture radar, carbon acquisition modeling, and upcoming field experiments are considered.

  14. A Bayesian tracker for multi-sensor passive narrowband fusion

    NASA Astrophysics Data System (ADS)

    Pirkl, Ryan J.; Aughenbaugh, Jason M.

    2016-05-01

    We demonstrate the detection and localization performance of a multi-sensor, passive sonar Bayesian tracker for underwater targets emitting narrowband signals in the presence of realistic underwater ambient noise. Our evaluation focuses on recent advances in the formulation of the likelihood function used by the tracker that provide greater robustness in the presence of both realistic environmental noise and imprecise/inaccurate a priori knowledge of the target's narrowband signal. These improvements enable the tracker to reliably detect and localize narrowband emitters for a broader range of propagation environments, target velocities, and inherent uncertainty in a priori knowledge.

  15. Integrating event detection system operation characteristics into sensor placement optimization.

    SciTech Connect

    Hart, William Eugene; McKenna, Sean Andrew; Phillips, Cynthia Ann; Murray, Regan Elizabeth; Hart, David Blaine

    2010-05-01

    We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents, and control of false alarms.

  16. Knowledge-based imaging-sensor fusion system

    NASA Astrophysics Data System (ADS)

    Westrom, George

    1989-11-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission. The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  17. Unique sensor fusion system for coordinate-measuring machine tasks

    NASA Astrophysics Data System (ADS)

    Nashman, Marilyn; Yoshimi, Billibon; Hong, Tsai Hong; Rippey, William G.; Herman, Martin

    1997-09-01

    This paper describes a real-time hierarchical system that fuses data from vision and touch sensors to improve the performance of a coordinate measuring machine (CMM) used for dimensional inspection tasks. The system consists of sensory processing, world modeling, and task decomposition modules. It uses the strengths of each sensor -- the precision of the CMM scales and the analog touch probe and the global information provided by the low resolution camera -- to improve the speed and flexibility of the inspection task. In the experiment described, the vision module performs all computations in image coordinate space. The part's boundaries are extracted during an initialization process and then the probe's position is continuously updated as it scans and measures the part surface. The system fuses the estimated probe velocity and distance to the part boundary in image coordinates with the estimated velocity and probe position provided by the CMM controller. The fused information provides feedback to the monitor controller as it guides the touch probe to scan the part. We also discuss integrating information from the vision system and the probe to autonomously collect data for 2-D to 3-D calibration, and work to register computer aided design (CAD) models with images of parts in the workplace.

  19. Efficient Sensor Placement Optimization Using Gradient Descent and Probabilistic Coverage

    PubMed Central

    Akbarzadeh, Vahab; Lévesque, Julien-Charles; Gagné, Christian; Parizeau, Marc

    2014-01-01

    We are proposing an adaptation of the gradient descent method to optimize the position and orientation of sensors for the sensor placement problem. The novelty of the proposed method lies in the combination of gradient descent optimization with a realistic model, which considers both the topography of the environment and a set of sensors with directional probabilistic sensing. The performance of this approach is compared with two other black box optimization methods over area coverage and processing time. Results show that our proposed method produces competitive results on smaller maps and superior results on larger maps, while requiring much less computation than the other optimization methods to which it has been compared. PMID:25196164
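    The abstract does not spell out the coverage model or the gradients, so the following is a minimal sketch of the general idea under simplifying assumptions: isotropic sensors on a flat area, a smooth sigmoid detection-probability model, and a finite-difference gradient in place of the authors' analytic one. All names and parameters (`coverage`, `RANGE`, the learning rate) are illustrative, not taken from the paper.

```python
# Minimal sketch of gradient-based sensor placement with a probabilistic
# coverage model (isotropic sensors on a flat grid). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Grid of points to be covered and initial random sensor positions.
xs, ys = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
points = np.column_stack([xs.ravel(), ys.ravel()])      # (P, 2)
sensors = rng.uniform(0, 10, size=(6, 2))               # (S, 2)

RANGE, STEEPNESS = 2.5, 2.0                              # assumed sensing-model parameters

def coverage(sensor_positions):
    """Mean probability that each grid point is seen by at least one sensor."""
    d = np.linalg.norm(points[None, :, :] - sensor_positions[:, None, :], axis=2)  # (S, P)
    p_detect = 1.0 / (1.0 + np.exp(STEEPNESS * (d - RANGE)))                       # smooth falloff
    p_missed = np.prod(1.0 - p_detect, axis=0)
    return np.mean(1.0 - p_missed)

def numeric_gradient(sensor_positions, eps=1e-4):
    """Finite-difference gradient of the coverage objective w.r.t. positions."""
    grad = np.zeros_like(sensor_positions)
    base = coverage(sensor_positions)
    for idx in np.ndindex(sensor_positions.shape):
        bumped = sensor_positions.copy()
        bumped[idx] += eps
        grad[idx] = (coverage(bumped) - base) / eps
    return grad

# Plain gradient ascent on the coverage objective, keeping sensors in the area.
for step in range(200):
    sensors = np.clip(sensors + 0.5 * numeric_gradient(sensors), 0, 10)

print(f"final mean coverage: {coverage(sensors):.3f}")
```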

  20. Decision-level fusion of SAR and IR sensor information for automatic target detection

    NASA Astrophysics Data System (ADS)

    Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon

    2017-05-01

    We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images into a target-enhanced map, and basic belief assignment is then used to transform this map into a belief map. The detection results of the two sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false alarm rate than the conventional algorithms.

  1. Data fusion of the super saturation sensor of magma in sugar boiling process

    NASA Astrophysics Data System (ADS)

    Su, Jianjun; Meng, Yanmei; Yuan, Haiying

    2005-12-01

    The supersaturation of magma, a main parameter in sugar boiling process control, is usually measured by electrical conductivity, but the sensor output is influenced by the purity and temperature of the magma. To eliminate that interference, the output is processed by a data fusion method based on a neural network. Using the measured value of the electrical-conductivity sensor, the syrup's Brix and the temperature as inputs, the network is trained by the BP algorithm to produce the fused output. Test results show that the data fusion method based on an artificial neural network can effectively eliminate the influence that changes in Brix and temperature have on the supersaturation reading, and thus attain a precise and stable output value.

  2. A Bayes-Maximum Entropy method for multi-sensor data fusion

    SciTech Connect

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.
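    As a rough illustration of the kind of construction described, the sketch below builds a Gibbs-style likelihood from distances between an ultrasound feature and candidate visual features and uses it in a two-hypothesis Bayesian test. The distances, the temperature parameter `beta` and the hypotheses are assumptions for the example, not the paper's actual model.

```python
# Sketch of a Gibbs-style likelihood built from feature distances, used in a
# simple Bayesian two-hypothesis test (illustrative; not the paper's exact model).
import numpy as np

def gibbs_likelihood(distances, beta):
    """Gibbs weights exp(-beta * d), normalized over the candidate set."""
    weights = np.exp(-beta * np.asarray(distances, dtype=float))
    return weights / weights.sum()

# Distances between an ultrasound point of interest and candidate visual features (assumed data).
distances = [0.4, 1.1, 2.5, 3.0]

# H1: the ultrasound and visual features correspond (small distances are likely).
# H0: no correspondence (all candidates equally likely).
likelihood_h1 = gibbs_likelihood(distances, beta=2.0)
likelihood_h0 = np.full(len(distances), 1.0 / len(distances))

prior_h1 = 0.5
observed = 0   # index of the observed closest-match candidate

posterior_h1 = (likelihood_h1[observed] * prior_h1) / (
    likelihood_h1[observed] * prior_h1 + likelihood_h0[observed] * (1 - prior_h1)
)
print(f"posterior probability of correspondence: {posterior_h1:.2f}")
```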

  3. Field of view selection for optimal airborne imaging sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Barnard, P. Werner; Fildis, Halidun; Erbudak, Mustafa; Senger, Tolga; Alpman, Mehmet E.

    2014-05-01

    The choice of the Field of View (FOV) of imaging sensors used in airborne targeting applications has a major impact on the overall performance of the system. A market survey of published data on sensors used in stabilized airborne targeting systems shows a trend toward ever narrower FOVs housed in smaller and lighter volumes. This approach promotes the ever-increasing geometric resolution provided by narrower FOVs, while seemingly ignoring the influence the FOV selection has on the sensor's sensitivity, the effects of diffraction, the influence of sight-line jitter and, collectively, the overall system performance. This paper presents a trade-off methodology to select the optimal FOV for an imaging sensor that is limited in aperture diameter by mechanical constraints (such as the space/volume available and the window size) by balancing the influences the FOV has on sensitivity and resolution, thereby optimizing the system's performance. The methodology may be applied to staring-array-based imaging sensors across all wavebands, from visible/day cameras through to long-wave infrared thermal imagers. Some examples of sensor analysis applying the trade-off methodology are given that highlight the performance advantages that can be gained by maximizing the aperture diameter and choosing the optimal FOV for an imaging sensor used in airborne targeting applications.
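    The underlying trade-off can be seen with a back-of-envelope calculation: for a fixed aperture diameter, narrowing the FOV lengthens the focal length and shrinks the geometric IFOV, while the angular diffraction blur (about 2.44 λ/D) stays constant, so beyond some point a narrower FOV buys no extra resolved detail while the higher F-number reduces sensitivity. This is a generic physics sketch, not the paper's methodology; the aperture, waveband, pixel pitch and pixel count below are illustrative assumptions.

```python
# Back-of-envelope FOV trade-off for a fixed aperture diameter.
# All numbers (aperture, waveband, pixel pitch, pixel count) are illustrative.
import numpy as np

APERTURE_D = 0.10        # m, fixed by mechanical constraints
WAVELENGTH = 4.0e-6      # m, mid-wave infrared
PIXEL_PITCH = 15e-6      # m
NUM_PIXELS = 640

# Angular diffraction blur (Airy disk) depends only on wavelength and aperture,
# so it does not improve as the FOV is narrowed.
diffraction_blur = 2.44 * WAVELENGTH / APERTURE_D   # radians

for fov_deg in (1.0, 2.0, 4.0):
    half_width = NUM_PIXELS * PIXEL_PITCH / 2                   # detector half-size (m)
    focal_length = half_width / np.tan(np.radians(fov_deg) / 2)
    f_number = focal_length / APERTURE_D                        # higher F/# -> lower sensitivity
    ifov = PIXEL_PITCH / focal_length                           # geometric resolution (rad)
    print(f"FOV {fov_deg:3.1f} deg: F/{f_number:4.1f}, IFOV {ifov*1e6:5.1f} urad, "
          f"diffraction blur {diffraction_blur*1e6:5.1f} urad")
```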

  4. Sensor fusion IV: Control paradigms and data structures; Proceedings of the Meeting, Boston, MA, Nov. 12-15, 1991

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and applications and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.

  6. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  7. Sensor fusion: Spatial reasoning and scene interpretation; Proceedings of the Meeting, Cambridge, MA, Nov. 7-9, 1988

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1989-01-01

    The present conference discusses topics in the fusion of active and passive sensors, object estimation and verification, three-dimensional representation and knowledge integration, three-dimensional perception from multisensor data, the representation of uncertainty in multisensor fusion, and sensor calibration and registration. Also discussed are the areas of multisensor target detection and classification, multisensor processing architectures, knowledge structures and spatial reasoning, sensory interfaces to telerobotic systems, and navigation with spatial data bases.

  9. Comparison of 2D and 3D Displays and Sensor Fusion for Threat Detection, Surveillance, and Telepresence

    DTIC Science & Technology

    2003-05-19

    Comparison of 2D and 3D displays and sensor fusion for threat detection, surveillance, and telepresence. T. Meitzler, Ph.D., D. Bednarz, Ph.D., K... camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D... technologies that take advantage of 3D and sensor fusion will be discussed. 1. INTRODUCTION: Computer-driven interactive 3D imaging has made...

  10. Particle swarm optimization for the clustering of wireless sensors

    NASA Astrophysics Data System (ADS)

    Tillett, Jason C.; Rao, Raghuveer M.; Sahin, Ferat; Rao, T. M.

    2003-07-01

    Clustering is necessary for data aggregation, hierarchical routing, optimizing sleep patterns, election of extremal sensors, optimizing coverage and resource allocation, reuse of frequency bands and codes, and conserving energy. Optimal clustering is typically an NP-hard problem, and solutions to NP-hard problems involve searches through vast spaces of possible solutions. Evolutionary algorithms have been applied successfully to a variety of NP-hard problems. We explore one such approach, Particle Swarm Optimization (PSO), an evolutionary programming technique in which a 'swarm' of test solutions, analogous to a natural swarm of bees, ants or termites, is allowed to interact and cooperate to find the best solution to the given problem. We use the PSO approach to cluster sensors in a sensor network. The energy efficiency of our clustering in a data-aggregation type sensor network deployment is tested using a modified LEACH-C code. The PSO technique with a recursive bisection algorithm is tested against random search and simulated annealing; the PSO technique is shown to be robust. We further investigate developing a distributed version of the PSO algorithm for optimally clustering a wireless sensor network.
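    A minimal PSO sketch in the spirit of the abstract is given below: each particle encodes candidate cluster-head positions, and the fitness is a simple energy proxy (total squared distance from nodes to their nearest head). The swarm parameters and fitness function are illustrative assumptions rather than the paper's LEACH-C-based setup.

```python
# Minimal particle swarm optimization (PSO) sketch for choosing cluster-head
# positions in a sensor field. Parameters and fitness are illustrative.
import numpy as np

rng = np.random.default_rng(1)

nodes = rng.uniform(0, 100, size=(80, 2))   # sensor node positions
K = 4                                       # number of cluster heads
DIM = K * 2                                 # each particle encodes K (x, y) positions

def fitness(particle):
    """Total squared distance from every node to its nearest cluster head (lower is better)."""
    heads = particle.reshape(K, 2)
    d2 = ((nodes[:, None, :] - heads[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

N_PARTICLES, W, C1, C2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 100, size=(N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, N_PARTICLES, DIM))
    # Standard velocity update: inertia + pull toward personal and global bests.
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best clustering cost: {pbest_val.min():.1f}")
```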

  11. Detecting clustered chem/bio signals in noisy sensor feeds using adaptive fusion

    NASA Astrophysics Data System (ADS)

    Lundberg, Scott; Calderon, Chris; Paffenroth, Randy

    2012-05-01

    Chemical and biological monitoring systems are faced with the challenge of detecting weak signals from contaminants of interest while at the same time maintaining extremely low false alarm rates. We present methods to control the number of false alarms while maintaining the power to detect, and evaluate these methods on a fixed sensor grid. Contaminants are detected using signals produced by underlying sensor-specific detection algorithms. By learning from past data, an adaptive background model is constructed and used with a multi-hypothesis testing method to control the false alarm rate. Detection methods for chemical/biological releases often depend on specific models for release types and missed detection rates at the sensors. This can be problematic in field situations where environment-specific effects can alter both a sensor's false alarm and missed detection characteristics. Using field data, the false alarm statistics of a given sensor can be learned and used for inference; however, the missed detection statistics for a sensor are not observable while in the field. As a result, we pursue methods that do not rely on accurate estimates of a sensor's missed detection rate. This leads to the development of the Adaptive Regions Method, which under certain assumptions is designed to conservatively control the expected rate of false alarms produced by a fusion system over time, while maintaining the power to detect.

  12. Optimal scheduling of multiple sensors in continuous time.

    PubMed

    Wu, Xiang; Zhang, Kanjian; Sun, Changyin

    2014-05-01

    This paper considers an optimal sensor scheduling problem in continuous time. To make the model closer to practical problems, the following conditions are assumed: only one sensor may be active at any one time; an admissible sensor schedule is a piecewise constant function with a finite number of switches; and each sensor either does not operate or operates for a minimum, non-negligible amount of time. However, the switching times are unknown and the feasible region is not connected, so it is difficult to solve the problem by conventional optimization techniques. To overcome this difficulty, an algorithm combining a binary relaxation, a time-scaling transformation and an exact penalty function is developed for solving this problem. Numerical results show that the algorithm is effective.

  13. Optimal sensor location for parameter identification in soft clay

    NASA Astrophysics Data System (ADS)

    Hölter, R.; Mahmoudi, E.; Schanz, T.

    2015-10-01

    Performing parameter identification for model calibration prior to numerical simulation is an essential task in geotechnical engineering. However, it has to be kept in mind that the accuracy of the obtained parameter is closely related to the chosen experimental set-up, such as the number of sensors as well as their location. A well considered position of sensors can increase the quality of the measurement and reduce the number of monitoring points. This paper illustrates this concept by means of a loading device that is used to identify the stiffness and permeability factor of soft clays. With an initial set-up of the measurement devices the pore water pressure and the vertical displacements are recorded and used to identify the aforementioned parameters. Starting from these identified parameters, the optimal measurement set-up is investigated with a method based on global sensitivity analysis. This method shows an optimal sensor location assuming three sensors for each measured quantity.

  14. Sensor Fusion of Position- and Micro-Sensors (MEMS) integrated in a Wireless Sensor Network for movement detection in landslide areas

    NASA Astrophysics Data System (ADS)

    Arnhardt, Christian; Fernández-Steeger, Tomas; Azzam, Rafig

    2010-05-01

    Monitoring systems in landslide areas are important elements of effective early warning structures. Data acquisition and retrieval allow the detection of movement processes and thus are essential to generate warnings in time. Apart from precise measurement, the reliability of the data is fundamental, because outliers can trigger false alarms and lead to a loss of acceptance of such systems. For the monitoring of mass movements and their risk it is important to know whether there is movement, how fast it is and how trustworthy the information is. The joint project "Sensor-based landslide early warning system" (SLEWS) deals with these questions and tries to improve data quality and reduce false alarm rates through the combination of sensor data (sensor fusion). The project concentrates on the development of a prototypic alarm and early warning system (EWS) for different types of landslides using various low-cost sensors integrated in a wireless sensor network (WSN). The network consists of numerous connection points (nodes) that transfer data directly or over other nodes (multi-hop) in real time to a data collection point (gateway). From there all the data packages are transmitted to a spatial data infrastructure (SDI) for further processing, analysis and visualization with respect to end-user specifications. The ad-hoc characteristic of the network allows the autonomous crosslinking of the nodes according to existing connections and communication strength. Due to the independent finding of new or more stable connections (self-healing), a breakdown of the whole system is avoided. The bidirectional data stream enables receiving data from the network but also allows the transfer of commands and pointed requests into the WSN. For the detection of surface deformations in landslide areas, small low-cost Micro-Electro-Mechanical Systems (MEMS) and position sensors from the automobile industry, different industrial applications and from other measurement

  15. Towards an Optimal Energy Consumption for Unattended Mobile Sensor Networks through Autonomous Sensor Redeployment

    PubMed Central

    Jia, Jie; Wen, Yingyou; Zhao, Dazhe

    2014-01-01

    Energy hole is an inherent problem caused by heavier traffic loads of sensor nodes nearer the sink because of more frequent data transmission, which is strongly dependent on the topology induced by the sensor deployment. In this paper, we propose an autonomous sensor redeployment algorithm to balance energy consumption and mitigate energy hole for unattended mobile sensor networks. First, with the target area divided into several equal width coronas, we present a mathematical problem modeling sensor node layout as well as transmission pattern to maximize network coverage and reduce communication cost. And then, by calculating the optimal node density for each corona to avoid energy hole, a fully distributed movement algorithm is proposed, which can achieve an optimal distribution quickly only by pushing or pulling its one-hop neighbors. The simulation results demonstrate that our algorithm achieves a much smaller average moving distance and a much longer network lifetime than existing algorithms and can eliminate the energy hole problem effectively. PMID:24949494

  16. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as they would be seen from the cockpit of a crewed spacecraft or aircraft, or from a remotely controlled ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.

  17. Data Fusion from Voltammetric and Potentiometric Sensors to Build a Hybrid Electronic Tongue Applied in Classification of Beers

    NASA Astrophysics Data System (ADS)

    Haddi, Zouhair; Amari, Aziz; Bouchikhi, Benachir; Gutiérrez, Juan Manuel; Cetó, Xavier; Mimendia, Aitor; del Valle, Manel

    2011-09-01

    A hybrid electronic tongue based on data fusion from two different sensor families was built and used to recognize three types of beer. The sensor array was formed by three modified graphite-epoxy voltammetric sensors plus six potentiometric sensors with cross-sensitivity. The sensor array, coupled with feature extraction and pattern recognition methods, namely Principal Component Analysis (PCA) and Discriminant Factor Analysis (DFA), was trained to classify the data clusters related to the different beer types. PCA was used to visualize the different categories of taste profiles, and DFA with a leave-one-out cross-validation approach permitted the qualitative classification. According to the DFA model, 96% of beer samples were correctly classified. The aim of this work is to demonstrate the performance of hybrid electronic tongue systems exploiting the new approach of fusing data from different sensor families, in comparison with an electronic tongue based on only one sensor type.

  18. Shape optimization for a composite crack-length sensor

    NASA Technical Reports Server (NTRS)

    Prabhakaran, R.; Ramasamy, M.

    1992-01-01

    Techniques for improving the sensitivity of a carbon powder-polymer composite continuous crack-length sensor are examined. These conductive polymeric sensors can be used as crack-length gages, the shape of which can be varied in certain ways to improve their sensitivity. It is concluded that the gages with optimized or tapered shape are capable of overcoming the principal disadvantage of rectangular gages, i.e., poor sensitivity at small crack length.

  19. Optimal placement of excitations and sensors by simulated annealing

    NASA Technical Reports Server (NTRS)

    Salama, Moktar; Bruno, R.; Chen, G.-S.; Garba, J.

    1989-01-01

    The optimal placement of discrete actuators and sensors is posed as a combinatorial optimization problem. Two examples for truss structures were used for illustration; the first dealt with the optimal placement of passive dampers along existing truss members, and the second dealt with the optimal placement of a combination of a set of actuators and a set of sensors. Except for the simplest problems, an exact solution by enumeration involves a very large number of function evaluations, and is therefore computationally intractable. By contrast, the simulated annealing heuristic involves far fewer evaluations and is best suited for the class of problems considered. As an optimization tool, the effectiveness of the algorithm is enhanced by introducing a number of rules that incorporate knowledge about the physical behavior of the problem. Some of the suggested rules are necessarily problem dependent.
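    A minimal sketch of simulated annealing for this kind of combinatorial placement is shown below, assuming a toy objective (the number of targets within sensing range of a selected location) and a geometric cooling schedule; the cost function, move set and schedule are illustrative stand-ins, not the paper's problem-dependent rules.

```python
# Simulated annealing sketch for combinatorial placement: choose M sensor
# locations out of N candidates to maximize a simple coverage score.
import math
import random

random.seed(0)

N_CANDIDATES, M = 40, 6
targets = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(60)]
candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_CANDIDATES)]

def score(selection):
    """Number of targets within range 2.0 of at least one selected location."""
    covered = 0
    for tx, ty in targets:
        if any(math.hypot(tx - candidates[i][0], ty - candidates[i][1]) <= 2.0 for i in selection):
            covered += 1
    return covered

current = random.sample(range(N_CANDIDATES), M)
best, best_score = list(current), score(current)
temperature = 5.0

while temperature > 0.01:
    # Neighbor move: swap one selected location for an unselected one.
    out_idx = random.randrange(M)
    new_loc = random.choice([i for i in range(N_CANDIDATES) if i not in current])
    candidate = list(current)
    candidate[out_idx] = new_loc

    delta = score(candidate) - score(current)
    # Accept improvements always, and worse moves with a temperature-dependent probability.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        current = candidate
        if score(current) > best_score:
            best, best_score = list(current), score(current)
    temperature *= 0.995   # geometric cooling

print("best coverage:", best_score, "locations:", sorted(best))
```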

  20. Development of Miniaturized Optimized Smart Sensors (MOSS) for space plasmas

    NASA Technical Reports Server (NTRS)

    Young, D. T.

    1993-01-01

    The cost of space plasma sensors is high for several reasons: (1) Most are one-of-a-kind and state-of-the-art, (2) the cost of launch to orbit is high, (3) ruggedness and reliability requirements lead to costly development and test programs, and (4) overhead is added by overly elaborate or generalized spacecraft interface requirements. Possible approaches to reducing costs include development of small 'sensors' (defined as including all necessary optics, detectors, and related electronics) that will ultimately lead to cheaper missions by reducing (2), improving (3), and, through work with spacecraft designers, reducing (4). Despite this logical approach, there is no guarantee that smaller sensors are necessarily either better or cheaper. We have previously advocated applying analytical 'quality factors' to plasma sensors (and spacecraft) and have begun to develop miniaturized particle optical systems by applying quantitative optimization criteria. We are currently designing a Miniaturized Optimized Smart Sensor (MOSS) in which miniaturized electronics (e.g., employing new power supply topology and extensive use of gate arrays and hybrid circuits) are fully integrated with newly developed particle optics to give significant savings in volume and mass. The goal of the SwRI MOSS program is development of a fully self-contained and functional plasma sensor weighing 1 lb and requiring 1 W. MOSS will require only a typical spacecraft DC power source (e.g., 30 V) and command/data interfaces in order to be fully functional, and will provide measurement capabilities comparable in most ways to current sensors.

  2. Steam distribution and energy delivery optimization using wireless sensors

    SciTech Connect

    Olama, Mohammed M; Allgood, Glenn O; Kuruganti, Phani Teja; Sukumar, Sreenivas R; Djouadi, Seddik M; Lake, Joe E

    2011-01-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time measurement-based energy efficiency optimization framework in the ORNL campus. With particular focus on the 12-mile long steam distribution network in our campus, we propose an integrated system-level approach to optimize the energy delivery within the steam distribution system. We address the goal of achieving significant energy-saving in steam lines by monitoring and acting on leaking steam valves/traps. Our approach leverages an integrated wireless sensor and real-time monitoring capabilities. We make assessments on the real-time status of the distribution system by mounting acoustic sensors on the steam pipes/traps/valves and observe the state measurements of these sensors. Our assessments are based on analysis of the wireless sensor measurements. We describe Fourier-spectrum based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We are able to present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform that extends implementation to include newer modalities to manage water flow, sewage and energy consumption.
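    A toy version of the spectral check described above might look like the sketch below: compute the power spectrum of an acoustic sample and compare band energies to flag a possible leak. The band edges, threshold and synthetic signal are assumptions for illustration, not ORNL's actual algorithm.

```python
# Sketch of a Fourier-spectrum check on an acoustic vibration signal, of the
# kind that could flag a leaking steam trap. All values are illustrative.
import numpy as np

FS = 10_000                      # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / FS)

# Synthetic sensor reading: broadband hiss (leak-like) on top of low-frequency flow noise.
signal = 0.2 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

def band_energy(lo, hi):
    """Sum of spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

low_band = band_energy(0, 500)          # normal flow noise
high_band = band_energy(2_000, 5_000)   # hiss associated with leakage (assumed band)

status = "suspect leak" if high_band > 0.5 * low_band else "normal"
print(f"low-band energy {low_band:.1f}, high-band energy {high_band:.1f} -> {status}")
```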

  3. Optimal configuration of redundant inertial sensors for navigation and FDI performance.

    PubMed

    Shim, Duk-Sun; Yang, Cheol-Kwan

    2010-01-01

    This paper considers the optimal sensor configuration for inertial navigation systems which have redundant inertial sensors such as gyroscopes and accelerometers. We suggest a method to determine the optimal sensor configuration which considers both the navigation and FDI performance. Monte Carlo simulations are performed to show the performance of the suggested optimal sensor configuration method.

  5. An automatic fall detection framework using data fusion of Doppler radar and motion sensor network.

    PubMed

    Liu, Liang; Popescu, Mihail; Skubic, Marjorie; Rantz, Marilyn

    2014-01-01

    This paper describes the ongoing work of detecting falls in independent living senior apartments. We have developed a fall detection system with Doppler radar sensor and implemented ceiling radar in real senior apartments. However, the detection accuracy on real world data is affected by false alarms inherent in the real living environment, such as motions from visitors. To solve this issue, this paper proposes an improved framework by fusing the Doppler radar sensor result with a motion sensor network. As a result, performance is significantly improved after the data fusion by discarding the false alarms generated by visitors. The improvement of this new method is tested on one week of continuous data from an actual elderly person who frequently falls while living in her senior home.

  6. Position Estimation by Wearable Walking Navigation System for Visually Impaired with Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Watanabe, Hiromi; Yamamoto, Yoshihiko; Tanzawa, Tsutomu; Kotani, Shinji

    A wearable walking navigation system that requires no special infrastructure has been developed to guide the visually impaired. It is important to estimate position correctly so that safe navigation can be realized. In our system, different sensor data are fused to estimate a pedestrian's position. An image processing system and a laser range finder (LRF) were used to estimate positions indoors. In this paper, we introduce the concept of "similarity" between map information and sensor data, and this similarity is used to estimate the positions. Experimental results show that highly accurate position estimation can be achieved by sensor fusion. The positions in a linear passage were estimated using image processing data, and where the passage turns, the positions were estimated using LRF data.

  7. Optimization of multispectral sensors for bathymetry applications

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Byrnes, H. J.

    1986-01-01

    The Naval Oceanographic Office has proposed to augment current capabilities with an airborne MSS system capable of conducting hydrographic surveys of shallow and clear oceanic waters for purposes of determining ocean depth and identifying marine hazards. Recent efforts have concentrated on development of an active/passive system, where the active system will be used to calibrate a passive multispectral sensor. In this paper, parameters which influence collection-system design and depth-extraction techniques have been used to describe the practical bounds to which MSS technology can support coastal bathymetric surveying. Performance is estimated in terms of expected S/N and depth-extraction errors.

  8. Optimizing Sensor and Actuator Arrays for ASAC Noise Control

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Cabell, Ran

    2000-01-01

    This paper summarizes the development of an approach to optimizing the locations for arrays of sensors and actuators in active noise control systems. A type of directed combinatorial search, called Tabu Search, is used to select an optimal configuration from a much larger set of candidate locations. The benefit of using an optimized set is demonstrated. The importance of limiting actuator forces to realistic levels when evaluating the cost function is discussed. Results of flight testing an optimized system are presented. Although the technique has been applied primarily to Active Structural Acoustic Control systems, it can be adapted for use in other active noise control implementations.
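    A minimal Tabu Search sketch for this kind of combinatorial selection is given below, using a toy pairwise-benefit cost in place of the ASAC noise-control cost function; the tabu-list length, move set and cost are illustrative assumptions only.

```python
# Tabu-search sketch for selecting sensor/actuator locations from a candidate set.
# The pairwise "benefit" cost is a stand-in for the noise-control cost function.
import random

random.seed(2)

N_CANDIDATES, M, TABU_LEN, ITERS = 30, 5, 7, 200
# Random "benefit" matrix: value of using locations i and j together (assumed data;
# only entries with i < j are used by the cost function).
benefit = [[random.random() for _ in range(N_CANDIDATES)] for _ in range(N_CANDIDATES)]

def cost(selection):
    """Negative summed pairwise benefit (we minimize)."""
    return -sum(benefit[i][j] for i in selection for j in selection if i < j)

current = random.sample(range(N_CANDIDATES), M)
best, best_cost = list(current), cost(current)
tabu = []   # recently swapped-out locations that may not re-enter immediately

for _ in range(ITERS):
    neighbors = []
    for out in current:
        for inn in range(N_CANDIDATES):
            if inn in current or inn in tabu:
                continue
            cand = [inn if x == out else x for x in current]
            neighbors.append((cost(cand), out, cand))
    if not neighbors:
        break
    c, removed, current = min(neighbors)    # best admissible move, even if worse (escapes local minima)
    tabu = (tabu + [removed])[-TABU_LEN:]   # fixed-length tabu list
    if c < best_cost:
        best, best_cost = list(current), c

print("best cost:", round(best_cost, 3), "selected locations:", sorted(best))
```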

  9. Joint UK-Australian Space Surveillance Target Tracking, Cueing and Sensor Data Fusion Experiment

    NASA Astrophysics Data System (ADS)

    Donnelly, P.; Harwood, N.; Ash, A.; Eastment, J.; Ladd, D.; Walden, C.; Bennett, J.; Smith, C.; Ritchie, I.; Rutten, M.; Gordon, N.

    2014-09-01

    In February 2014 the UK and Australia carried out a joint space surveillance target tracking, cueing, and sensor data fusion experiment. Four organisations were involved, these being the UK Defence Science and Technology Laboratory (DSTL) and Science and Technology Facilities Council (STFC) with the Defence Science and Technology Organisation (DSTO) and Electro Optic Systems (EOS) of Australia. The experiment utilised the UK STFC CAMRa radar located at Chilbolton in southern England and an Australian optical camera and laser system owned and operated by EOS and located at Mount Stromlo near Canberra, Australia. An additional experimental camera owned and operated by DSTO and located at Adelaide, Australia also contributed. Three initial objectives of the experiment were all achieved, these being: 1) Use multiple CAMRa orbit passes to cue EOS optical sensor; 2) Use single CAMRa passes constrained by TLEs to cue EOS optical sensor; 3) Use EOS laser returns to provide an updated "reverse" cue for CAMRa radar. Due to the success of these three objectives, two additional objectives were also set during the trials, these being: 4) Use CAMRa orbits to cue DSTO experimental optical sensor; 5) Use CAMRa orbits to provide CAMRa self-cue. These objectives were also achieved. The experiments were performed over two one week periods with a one week separation between tracking campaigns. This paper describes the experimental programme from a top-level perspective and outlines the planning and execution of the experiment together with some initial analysis results. The main achievements and implications for use of dissimilar and geographically separated sensors for space situational awareness are highlighted. Two companion papers describe the sensor aspects of the experiment (Eastment et al.) and the data fusion aspects (Rutten et al.) respectively.

  10. Approach towards sensor placement, selection and fusion for real-time condition monitoring of precision machines

    NASA Astrophysics Data System (ADS)

    Er, Poi Voon; Teo, Chek Sing; Tan, Kok Kiong

    2016-02-01

    Moving mechanical parts in a machine inevitably generate vibration profiles that reflect its operating condition. Vibration profile analysis is a useful tool for real-time condition monitoring to avoid loss of performance and unwanted machine downtime. In this paper, we propose and validate an approach for sensor placement, selection and fusion for continuous machine condition monitoring. The main idea is to use a minimal set of sensors mounted at key locations of a machine to measure and infer the actual vibration spectrum at a critical point where it is not practical to mount a sensor. The sensor mounting locations subsequently used for vibration inference are identified based on sensitivity calibration at these locations, moderated by the normalized Fisher Information (NFI) associated with the measurement quality of the sensor at each location. Each of the identified sensor placement locations is associated with one or more sensitive frequencies for which it ranks top in terms of the moderated calibrated sensitivities. A set of Radial Basis Functions (RBF), each associated with a range of sensitive frequencies, is used to infer the vibration at the critical point for that frequency range. The overall vibration spectrum at the critical point is then fused from these components. A comprehensive set of experimental results validating the proposed approach is provided in the paper.
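    The RBF-based inference step can be sketched as a small kernel regression: calibration runs relate mounted-sensor readings to the amplitude measured once at the critical point, and the fitted model then predicts that amplitude during operation. The data, kernel width and regularization below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch of a radial basis function (RBF) model that infers the vibration
# amplitude at an unmeasurable critical point from a few mounted sensors.
import numpy as np

rng = np.random.default_rng(3)

# Calibration set: readings from 3 mounted sensors and the amplitude measured
# (during calibration only) at the critical point, for one frequency band.
X_cal = rng.uniform(0, 1, size=(40, 3))
y_cal = 0.6 * X_cal[:, 0] + 0.3 * X_cal[:, 1] ** 2 + 0.1 * X_cal[:, 2] + 0.02 * rng.normal(size=40)

GAMMA = 4.0   # RBF width parameter (assumed)

def rbf_kernel(A, B):
    """Gaussian kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-GAMMA * d2)

# Fit weights by regularized least squares on the kernel matrix.
K = rbf_kernel(X_cal, X_cal)
weights = np.linalg.solve(K + 1e-6 * np.eye(len(X_cal)), y_cal)

def infer_critical_point(sensor_readings):
    """Predict the critical-point amplitude from new mounted-sensor readings."""
    k = rbf_kernel(np.atleast_2d(sensor_readings), X_cal)
    return (k @ weights).item()

print(infer_critical_point([0.5, 0.2, 0.7]))
```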

  11. Optimization of the SHX Fusion Powered Transatmospheric Propulsion Concept

    NASA Technical Reports Server (NTRS)

    Adams, Robert B.; Landrum, D. Brian

    2001-01-01

    Existing propulsion technology has not achieved cost effective payload delivery rates to low earth orbit. A fusion based propulsion system, denoted as the Simultaneous Heating and eXpansion (SHX) engine, has been proposed in earlier papers. The SHX couples energy generated by a fusion reactor to the engine flowpath by use of coherent beam emitters. A quasi-one-dimensional flow model was used to quantify the effects of area expansion and energy input on propulsive efficiency for several beam models. Entropy calculations were included to evaluate the lost work in the system.

  12. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placement for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained using Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of the prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. Finally, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows higher modes to be better estimated when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.

  13. Global optimization of cryogenic-optical sensors

    NASA Astrophysics Data System (ADS)

    Yatsenko, Vitaliy A.; Pardalos, Panos M.

    2001-12-01

    We describe a phenomenon in which a macroscopic superconducting probe, as large as 2-6 cm, is chaotically and magnetically levitated. We have found that, when feedback is used, the probe moves chaotically near an equilibrium state. A global optimization approach to the highly sensitive measurement of weak signals is considered. Furthermore, an accurate mathematical model for the asymptotically stable estimation of a limiting weak noisy signal using a stochastic measurement model is considered.

  14. Optimization of multisource information fusion for resource management with remote sensing imagery: an aggregate regularization method with neural network implementation

    NASA Astrophysics Data System (ADS)

    Shkvarko, Yuriy, IV; Butenko, Sergiy

    2006-05-01

    We address a new approach to the problem of improving the quality of multi-grade spatial-spectral images provided by several remote sensing (RS) systems, as required for environmental resource management with the use of multisource RS data. The problem of multi-spectral reconstructive imaging with multisource information fusion is stated and treated as an aggregated ill-conditioned inverse problem: the reconstruction of a high-resolution image from data provided by several sensor systems that employ the same or different image formation methods. The proposed fusion-optimization technique aggregates the experiment design regularization paradigm with a neural-network-based implementation of the multisource information fusion method. The maximum entropy (ME) requirement and projection regularization constraints are posed as prior knowledge for fused reconstruction, and the experiment-design regularization methodology is applied to optimize the multisource information fusion. Computationally, the reconstruction and fusion are accomplished via minimization of the energy function of the proposed modified multistate Hopfield-type neural network (NN) that integrates the model parameters of all systems, incorporating a priori information, aggregate multisource measurements and calibration data. The developed theory proves that the designed maximum entropy neural network (MENN) is able to solve the multisource fusion tasks without substantial complication of its computational structure, independent of the number of systems to be fused. For each particular case, only the proper adjustment of the MENN's parameters (i.e., interconnection strengths and bias inputs) is required. Simulation examples are presented to illustrate the good overall performance of the fused reconstruction achieved with the developed MENN algorithm applied to real-world multi-spectral environmental imagery.

  15. Sensor fusion methods for reducing false alarms in heart rate monitoring.

    PubMed

    Borges, Gabriel; Brusamarello, Valner

    2016-12-01

    Automatic patient monitoring is an essential resource in hospitals for good health care management. While alarms caused by abnormal physiological conditions are important for the delivery of fast treatment, they can also be a source of unnecessary noise because of false alarms caused by electromagnetic interference or motion artifacts. One significant source of false alarms is the heart rate alarm, which is triggered when the heart rhythm of the patient is too fast or too slow. In this work, the fusion of different physiological sensors is explored in order to create a robust heart rate estimate. A set of algorithms using a heart rate variability index, Bayesian inference, neural networks, fuzzy logic and majority voting is proposed to fuse the information from the electrocardiogram, arterial blood pressure and photoplethysmogram. Three kinds of information are extracted from each source, namely, heart rate variability, the heart rate difference between sensors and the spectral analysis of low and high noise of each sensor. This information is used as input to the algorithms. Twenty recordings selected from the MIMIC database were used to validate the system. The results showed that neural network fusion had the best false alarm reduction, 92.5%, while the Bayesian technique had a reduction of 84.3%, fuzzy logic 80.6%, majority voting 72.5% and the heart rate variability index 67.5%. Therefore, the proposed algorithms showed good performance and could be useful in bedside monitors.
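    As an illustration of the simplest of the fusion rules mentioned (majority voting), the sketch below keeps a heart-rate alarm only when most sources agree that the rate is out of range; the thresholds and example values are assumptions, not the paper's tuned parameters.

```python
# Minimal majority-voting fusion sketch for suppressing heart-rate false alarms.
# Thresholds and example values are illustrative.

def hr_alarm(hr_estimates, low=40, high=140):
    """hr_estimates: heart-rate values (bpm) derived from ECG, ABP and PPG channels (None if unusable)."""
    votes = [hr is not None and (hr < low or hr > high) for hr in hr_estimates]
    return sum(votes) > len(votes) / 2   # a majority of sources must confirm the alarm

print(hr_alarm([35, 72, 70]))      # False: only the ECG-derived rate looks abnormal (likely artifact)
print(hr_alarm([170, 165, None]))  # True: both usable sources agree the rate is too high
```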

  16. Data fusion for target tracking and classification with wireless sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2016-10-01

    In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing constraints. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles moving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve the performance of tracking under constraint with classification fusion. Our algorithm also uses different dynamic models to handle target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation (a "hunter hunt" scenario).

  17. Fusion of WiFi, Smartphone Sensors and Landmarks Using the Kalman Filter for Indoor Localization

    PubMed Central

    Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua

    2015-01-01

    Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m. PMID:25569750
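    A minimal sketch of the linear fusion idea is shown below: PDR step vectors drive the Kalman prediction and WiFi position fixes are fused as measurements. The noise covariances and step values are illustrative assumptions, not the values used in the paper.

```python
# Minimal 2-D Kalman filter sketch: PDR displacements as the prediction step,
# WiFi position fixes as measurements. All noise values are illustrative.
import numpy as np

x = np.zeros(2)                 # state: (x, y) position in metres
P = np.eye(2) * 4.0             # initial uncertainty
Q = np.eye(2) * 0.05            # PDR process noise per step
R = np.eye(2) * 9.0             # WiFi fingerprinting measurement noise
H = np.eye(2)                   # WiFi measures position directly

def predict(step_vector):
    """PDR prediction: add the dead-reckoned displacement and grow the uncertainty."""
    global x, P
    x = x + step_vector
    P = P + Q

def update(wifi_position):
    """Fuse a WiFi position fix using the standard Kalman update."""
    global x, P
    y = wifi_position - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

predict(np.array([0.7, 0.1]))    # one detected step, heading mostly +x
update(np.array([1.2, 0.0]))     # WiFi fix pulls the estimate toward the measurement
print("fused position:", np.round(x, 2))
```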

  19. Improved Hidden Clique Detection by Optimal Linear Fusion of Multiple Adjacency Matrices

    DTIC Science & Technology

    2015-11-30

    Improved Hidden Clique Detection by Optimal Linear Fusion of Multiple Adjacency Matrices (Invited Paper). Himanshu Nayar, Rajmonda S. Caceres, Kelly... where we are given multiple Erdős-Rényi modeled adjacency matrices containing a common hidden or planted clique. The objective is to combine them... probability, we adopt a linear fusion model in which we analyze a convex combination of the adjacency matrices of the graphs. Within this context, we...

  20. Hydrogeologic Data Fusion. Industry Programs/Characterization, Monitoring, and Sensor Technology Crosscut Program. OST Reference #2944

    SciTech Connect

    None, None

    1999-09-01

    Problem: The fate and transport of contaminants in the subsurface requires knowledge of the hydrogeologic system. Site characterization typically involves the collection of various data sets needed to create a conceptual model that represents what is known about contaminant migration in the subsurface at a particular site. How Hydrogeologic Data Fusion works: Hydrogeologic Data Fusion is a mathematical tool that can be used to combine various types of geophysical, geologic, and hydrologic data from different types of sensors to estimate geologic and hydrogeologic properties. It can be especially useful at hazardous waste sites where the hydrology, geology, or contaminant distribution is significantly complex, such that groundwater modeling is required to enable a reasonable and accurate prediction of subsurface conditions.

  1. On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition

    PubMed Central

    Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2012-01-01

    The main objective of fusion mechanisms is to increase the reliability of individual systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes primarily take the form of sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, up to 60% improvement is obtained when a minority of the sensors are artificially rotated or degraded, independent of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered. PMID:22969386
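    A minimal sketch of sensor-weighted decision fusion is shown below: each sensor contributes class scores that are weighted by an estimated reliability, so a rotated or noisy sensor is down-weighted instead of corrupting the decision. The scores and weights are illustrative assumptions, not the paper's learned values.

```python
# Sketch of sensor-weighted decision fusion for activity recognition.
# Per-sensor class scores and reliability weights are illustrative.
import numpy as np

classes = ["walking", "sitting", "cycling"]

# Per-sensor class scores (rows: hip, wrist, ankle sensor). The wrist sensor is degraded.
scores = np.array([
    [0.7, 0.1, 0.2],
    [0.3, 0.4, 0.3],
    [0.6, 0.1, 0.3],
])
reliability = np.array([1.0, 0.2, 0.9])   # down-weight the disturbed sensor

# Weighted average of the class scores, then pick the most likely class.
fused = (reliability[:, None] * scores).sum(axis=0) / reliability.sum()
print("fused decision:", classes[int(fused.argmax())], np.round(fused, 2))
```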

  3. Design of 3D measurement system based on multi-sensor data fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Han, Jun; Yu, Xun

    2009-05-01

    With the rapid development of shape measurement techniques, the multi-sensor approach has become one valid way to improve accuracy, extend the measuring range, reduce occlusion, realize multi-resolution measurement and increase measuring speed simultaneously. Sensors in a multi-sensor system can have different system parameters, and they may have different measuring ranges and different precision. The light sectioning method is a useful technique for 3D profile measurement: it is insensitive to the surface optical properties of the 3D object and makes scarcely any demands on the surroundings. A multi-sensor system scheme, which uses the light sectioning method and multi-sensor data fusion techniques, is presented for measuring aviation engine blades and spiral bevel gears. The system model is developed to build the relationship between measuring range and precision and the system parameters. The system parameters were set according to system error analysis, measuring range and precision. The results show that the system is more universal than its predecessor, that the accuracy of the system is about 0.05 mm over a 60 × 60 mm2 measuring range, and that the system successfully measures the aerodynamic data curve of an aviation engine blade and the tooth profile of a spiral bevel gear with 360° multi-resolution measuring capability.

  4. Identifying and tracking pedestrians based on sensor fusion and motion stability predictions.

    PubMed

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Maria; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle.

  5. Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions

    PubMed Central

    Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo

    2010-01-01

    The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639

  6. The addition of a sagittal image fusion improves the prostate cancer detection in a sensor-based MRI/ultrasound fusion guided targeted biopsy.

    PubMed

    Günzel, Karsten; Cash, Hannes; Buckendahl, John; Königbauer, Maximilian; Asbach, Patrick; Haas, Matthias; Neymeyer, Jörg; Hinz, Stefan; Miller, Kurt; Kempkensteffen, Carsten

    2017-01-13

    To explore the diagnostic benefit of an additional image fusion of the sagittal plane in addition to the standard axial image fusion, using a sensor-based MRI/US fusion platform. Between July 2013 and September 2015, 251 patients with at least one suspicious lesion on mpMRI (rated by PI-RADS) were included in the analysis. All patients underwent MRI/US targeted biopsy (TB) in combination with a 10-core systematic prostate biopsy (SB). All biopsies were performed on a sensor-based fusion system. Group A included 162 men who received TB with an axial MRI/US image fusion. Group B comprised 89 men in whom the TB was performed with an additional sagittal image fusion. The median age in group A was 67 years (IQR 61-72) and in group B 68 years (IQR 60-71). The median PSA level in group A was 8.10 ng/ml (IQR 6.05-14) and in group B 8.59 ng/ml (IQR 5.65-12.32). In group A the proportion of patients with a suspicious digital rectal examination (DRE) (14 vs. 29%, p = 0.007) and the proportion of primary biopsies (33 vs. 46%, p = 0.046) were significantly lower. PI-RADS 3 lesions were overrepresented in group A compared to group B (19 vs. 9%; p = 0.044). Classified according to PI-RADS 3, 4 and 5, the detection rates of TB were 42, 48 and 75% in group A and 25, 74 and 90% in group B. The rate of PCa with a Gleason score ≥7 missed by TB was 33% (18 cases) in group A and 9% (5 cases) in group B (p = 0.072). An explorative multivariate binary logistic regression analysis revealed that PI-RADS, a suspicious DRE and performing an additional sagittal image fusion were significant predictors for PCa detection in TB. Nine PCa were detected only by TB with sagittal fusion (sTB), and sTB identified 10 additional clinically significant PCa (Gleason ≥7). Performing an additional sagittal image fusion besides the standard axial fusion appears to improve the accuracy of the sensor-based MRI/US fusion platform.

  7. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    To support the decision-making processes of flood risk management and long-term floodplain planning, a significant issue is the availability of data to build appropriate and reliable models. Often the required data for model building, calibration and validation are not sufficient or available. A unique opportunity is offered nowadays by globally available data, which can be freely downloaded from the internet. However, there remains the question of what the real potential of those global remote sensing data, characterized by different accuracies, is for global inundation monitoring and how to integrate them with inundation models. In order to monitor a reach of the River Dee (UK), a network of cheap wireless sensors (GridStix) was deployed both in the channel and in the floodplain. These sensors measure the water depth, supplying the input data for flood mapping. Besides their accuracy and reliability, their location represents a big issue, the purpose being to provide as much information as possible while keeping redundancy as low as possible. In order to update their layout, the initial number of six sensors was increased to create a redundant network over the area. Through an entropy approach, the most informative and least redundant sensors have been chosen among all. First, a simple raster-based inundation model (LISFLOOD-FP) is used to generate a synthetic GridStix data set of water stages. The Digital Elevation Model (DEM) used for hydraulic model building is the globally and freely available SRTM DEM. Second, the information content of each sensor has been compared by evaluating their marginal entropy. Those with a low marginal entropy are excluded from the process because of their low capability to provide information. Then the number of sensors has been optimized considering a Multi-Objective Optimization Problem (MOOP) with two objectives, namely maximization of the joint entropy (a measure of the information content) and
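
    The entropy-based screening described above can be sketched in a few lines: discretize the (synthetic) water-stage series of each candidate sensor, note that the first greedy pick is simply the sensor with the highest marginal entropy, and then grow the set so that joint entropy keeps increasing. This single-objective sketch deliberately ignores the second MOOP objective, and the bin count and synthetic data are placeholder assumptions.

```python
import numpy as np

def discretize(x, n_bins=10):
    """Map a continuous series to integer bin labels."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.digitize(x, edges[1:-1])

def joint_entropy(columns):
    """Shannon entropy (bits) of the joint distribution of discretized columns."""
    rows = np.stack(columns, axis=1)
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def greedy_selection(series, k, n_bins=10):
    """Pick k sensors that approximately maximize joint entropy; the first
    pick is the sensor with the largest marginal entropy."""
    disc = [discretize(s, n_bins) for s in series]
    chosen, remaining = [], list(range(len(series)))
    while len(chosen) < k and remaining:
        best = max(remaining,
                   key=lambda j: joint_entropy([disc[i] for i in chosen] + [disc[j]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic stand-in for modelled water stages at 8 candidate sensor sites.
rng = np.random.default_rng(0)
stages = [rng.normal(size=500) for _ in range(8)]
print(greedy_selection(stages, k=3))
```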

  8. Designing and Optimizing Future Spaceborne Multi-angular, Polarimetric Sensors

    NASA Astrophysics Data System (ADS)

    Petroy, S. B.; Nicholson, R. E.; D'Entremont, R. P.; Snell, H. E.

    2004-05-01

    Polarimetric measurements in the visible/near-infrared spectral region improve aerosol and cloud microphysical and compositional retrievals. The retrieval approaches exploit the unique polarimetric signatures of aerosols and clouds as a function of scattering angle, thereby driving the requirement for data collection over a large range of scattering angles (ideally between 0 and 180 degrees). Scattering angle coverage is a function both of the sensor/sun/target geometry and of the sensor architectural approach toward acquiring multi-angular data. These two functions must be considered when designing and implementing a spaceborne, multi-angular polarimetric sensor. The orbital geometry trade is dictated by the range of possible orbits and will quickly reduce to a subset of optimal orbital scenarios. However, the desired parameter of interest (aerosol vs. cloud properties), its spatial variability, and global extent must be considered when selecting an optimal orbit. For example, while a noon equatorial crossing time provides the best scattering angle coverage for the retrievals, the increased presence of clouds may preclude use of much of the data for characterizing aerosols. The sensor architectural trade investigates differing sensor approaches to providing sufficient scattering angle coverage. Current polarimetric sensor designs include both the over-lapping imagery approach (e.g. POLarization and Directionality of the Earth's Reflectances - POLDER) and the single-pixel, scanning approach (e.g. Research Scanning Polarimeter - RSP). POLDER (a spaceborne sensor) traded the benefit of image data with a large swath width against the collection of simultaneous polarimetry. RSP (an airborne sensor) collects multi-angular data by scanning the air mass during over-flight with a set of polarimetric compensating mirrors. The RSP design allows for simultaneous polarimetry and potentially very large scattering angle ranges on orbit, but is restricted to a single-pixel detector

  9. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
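
    The paper fuses the two channels with Tikhonov regularization in the frequency domain and FIR filters; the sketch below instead uses a first-order complementary filter, a much simpler stand-in that expresses the same idea of trusting the acceleration-based tilt at low frequencies and the integrated gyroscope at high frequencies. The crossover time constant tau is an assumed tuning parameter, not a value from the paper.

```python
import numpy as np

def complementary_filter(tilt_acc, gyro_rate, dt, tau=1.0):
    """Fuse acceleration-based tilt (accurate at low frequency) with gyroscope
    angular rate (accurate at high frequency) using a complementary filter.

    tilt_acc  : array of tilt angles from the acceleration-based sensor [rad]
    gyro_rate : array of angular rates from the gyroscope [rad/s]
    dt        : sample period [s]
    tau       : crossover time constant [s]; below ~1/tau the accelerometer
                dominates, above it the integrated gyro dominates.
    """
    alpha = tau / (tau + dt)
    theta = np.empty_like(np.asarray(tilt_acc, dtype=float))
    theta[0] = tilt_acc[0]
    for k in range(1, len(tilt_acc)):
        # integrate the gyro for the fast part, pull slowly toward the accel tilt
        theta[k] = alpha * (theta[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * tilt_acc[k]
    return theta
```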

  10. Resistive sensor and electromagnetic actuator for feedback stabilization of liquid metal walls in fusion reactors

    NASA Astrophysics Data System (ADS)

    Mirhoseini, S. M. H.; Volpe, F. A.

    2016-12-01

    Liquid metal walls in fusion reactors will be subject to instabilities, turbulence, induced currents, error fields and temperature gradients that will make them locally bulge, thus entering in contact with the plasma, or deplete, hence exposing the underlying solid substrate. To prevent this, research has begun to actively stabilize static or flowing free-surface liquid metal layers by locally applying forces in feedback with thickness measurements. Here we present resistive sensors of liquid metal thickness and demonstrate j × B actuators to locally control it.

  11. Optimal Sensor Layouts in Underwater Locomotory Systems

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Kanso, Eva

    2015-11-01

    Retrieving and understanding global flow characteristics from local sensory measurements is a challenging but extremely relevant problem in fields such as defense, robotics, and biomimetics. It is an inverse problem in that the goal is to translate local information into global flow properties. In this talk we present techniques for optimization of sensory layouts within the context of an idealized underwater locomotory system. Using techniques from fluid mechanics and control theory, we show that, under certain conditions, local measurements can inform the submerged body about its orientation relative to the ambient flow, and allow it to recognize local properties of shear flows. We conclude by commenting on the relevance of these findings to underwater navigation in engineered systems and live organisms.

  12. Geometry optimization for micro-pressure sensor considering dynamic interference

    SciTech Connect

    Yu, Zhongliang; Zhao, Yulong; Li, Lili; Tian, Bian; Li, Cun

    2014-09-15

    Presented is the geometry optimization for a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index to describe the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into a typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fittings based on ANSYS simulation results are performed to establish the equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries. Convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is utilized to fabricate prototypes of the sensors. The outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF of 210.947 Hz^(1/4). The favorable overall performance makes the sensor more suitable for altimetry.

  13. Optimal control design for systems with collocated sensors and actuators

    NASA Astrophysics Data System (ADS)

    Leo, Donald J.; Inman, Daniel J.

    1996-05-01

    Smart material systems enable near collocation of sensors and actuators for controlled structures. Distributed sensors and actuators, placed in close proximity to one another, yield high bandwidth control systems that exhibit passivity characteristics that can be exploited in the design of robust structural control laws. Transfer function properties of Single-Input-Single-Output (SISO) systems with collocated sensors and actuators are well understood. In this paper, analogies between the SISO case and Multiple-Input-Multiple-Output systems with collocated sensors and actuators are developed. The analogies are based on the eigenproperties of complex symmetric matrices; namely, that the eigenvectors of complex symmetric matrices are orthogonal to their simple transpose, and that the eigenvalues of complex symmetric matrices are bounded by the definiteness of their real and imaginary components. These theorems are derived and applied to the analysis and control of nongyroscopic, noncirculatory mechanical systems. Transfer matrices of mechanical systems with collocated sensors and actuators are shown to be complex symmetric matrices whose eigenproperties are determined by the type of collocated feedback. These properties are derived for both the general damping case and for the case of modal damping. An optimal control technique based on the eigenproperties of complex symmetric systems is developed. The technique is a constrained convex optimization program that can incorporate many different types of performance and constraint specifications. The technique is derived in the paper and a design example is included.
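
    The key matrix fact quoted above, that eigenvectors of a complex symmetric matrix are orthogonal with respect to the simple (non-conjugate) transpose, is easy to verify numerically. The snippet below is only such a sanity check on a random matrix, not part of the paper's control design.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = B + B.T                      # complex symmetric: A == A.T (no conjugation)

eigvals, V = np.linalg.eig(A)
G = V.T @ V                      # Gram matrix under the plain transpose

# Off-diagonal entries vanish: eigenvectors for distinct eigenvalues are
# orthogonal with respect to the simple transpose, as stated in the abstract.
off_diag = G - np.diag(np.diag(G))
print(np.max(np.abs(off_diag)))  # ~1e-15 for a generic A with distinct eigenvalues
```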

  14. Robust optimization of contaminant sensor placement for community water systems.

    SciTech Connect

    Konjevod, Goran; Carr, Robert D.; Greenberg, Harvey J.; Hart, William Eugene; Morrison, Tod; Phillips, Cynthia Ann; Lin, Henry; Lauer, Erik

    2004-09-01

    We present a series of related robust optimization models for placing sensors in municipal water networks to detect contaminants that are maliciously or accidentally injected. We formulate sensor placement problems as mixed-integer programs, for which the objective coefficients are not known with certainty. We consider a restricted absolute robustness criterion that is motivated by natural restrictions on the uncertain data, and we define three robust optimization models that differ in how the coefficients in the objective vary. Under one set of assumptions there exists a sensor placement that is optimal for all admissible realizations of the coefficients. Under other assumptions, we can apply sorting to solve each worst-case realization efficiently, or we can apply duality to integrate the worst-case outcome and have one integer program. The most difficult case is where the objective parameters are bilinear, and we prove its complexity is NP-hard even under simplifying assumptions. We consider a relaxation that provides an approximation, giving an overall guarantee of near-optimality when used with branch-and-bound search. We present preliminary computational experiments that illustrate the computational complexity of solving these robust formulations on sensor placement applications.

  15. Anchor Node Localization for Wireless Sensor Networks Using Video and Compass Information Fusion

    PubMed Central

    Pescaru, Dan; Curiac, Daniel-Ioan

    2014-01-01

    Distributed sensing, computing and communication capabilities of wireless sensor networks require, in most situations, an efficient node localization procedure. In the case of random deployments in harsh or hostile environments, a general localization process within global coordinates is based on a set of anchor nodes able to determine their own position using GPS receivers. In this paper we propose another anchor node localization technique that can be used when GPS devices cannot accomplish their mission or are considered to be too expensive. This novel technique is based on the fusion of video and compass data acquired by the anchor nodes and is especially suitable for video- or multimedia-based wireless sensor networks. For these types of wireless networks the presence of video cameras is intrinsic, while the presence of digital compasses is also required for identifying the cameras' orientations. PMID:24594614

  16. Reconnaissance blind multi-chess: an experimentation platform for ISR sensor fusion and resource management

    NASA Astrophysics Data System (ADS)

    Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.

    2016-05-01

    This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.

  17. Regularized discriminant analysis for multi-sensor decision fusion and damage detection with Lamb waves

    NASA Astrophysics Data System (ADS)

    Mishra, Spandan; Vanli, O. Arda; Huffer, Fred W.; Jung, Sungmoon

    2016-04-01

    In this study we propose a regularized linear discriminant analysis approach for damage detection which does not require an intermediate feature extraction step and is therefore more efficient in handling high-dimensional data. A robust discriminant model is obtained by shrinking the covariance matrix to a diagonal matrix and thresholding redundant predictors without hurting the predictive power of the model. The shrinkage and threshold parameters of the discriminant function (decision boundary) are estimated to minimize the classification error. Furthermore, it is shown how the damage classification achieved by the proposed method can be extended to multiple sensors by following a Bayesian decision-fusion formulation. The detection probability of each sensor is used as a prior condition to estimate the posterior detection probability of the entire network, and this posterior detection probability is used as a quantitative basis for the final decision about the damage.
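
    The multi-sensor step described above can be illustrated with a minimal Bayesian decision-fusion sketch: each sensor reports a posterior probability of damage, and a network-level posterior is formed by combining them. The conditional-independence assumption and the common prior used below are illustrative simplifications, not taken from the paper.

```python
import numpy as np

def fuse_damage_posteriors(p_sensors, prior=0.5):
    """Combine per-sensor posterior damage probabilities into a network-level
    posterior, assuming the sensors are conditionally independent given the
    damage state.

    p_sensors : iterable of P(damage | sensor i data), each in (0, 1)
    prior     : common prior P(damage) assumed to underlie each sensor posterior
    """
    p = np.asarray(p_sensors, dtype=float)
    n = len(p)
    # Recover each sensor's likelihood ratio from its posterior, multiply them,
    # then re-apply the prior once.
    log_lr = np.sum(np.log(p / (1 - p))) - n * np.log(prior / (1 - prior))
    log_odds = np.log(prior / (1 - prior)) + log_lr
    return 1.0 / (1.0 + np.exp(-log_odds))

print(fuse_damage_posteriors([0.7, 0.65, 0.8]))   # agreement strengthens the call
print(fuse_damage_posteriors([0.7, 0.3, 0.55]))   # disagreement pulls it back
```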

  18. Neuromorphic Audio–Visual Sensor Fusion on a Sound-Localizing Robot

    PubMed Central

    Chan, Vincent Yue-Sek; Jin, Craig T.; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio–visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4–5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment. PMID:22347165

  19. On modeling of tool wear using sensor fusion and polynomial classifiers

    NASA Astrophysics Data System (ADS)

    Deiab, Ibrahim; Assaleh, Khaled; Hammad, Firas

    2009-07-01

    With increased global competition, the manufacturing sector is vigorously working on enhancing the efficiency of manufacturing processes in terms of cost, quality, and environmental impact. This work presents a novel approach to model and predict cutting tool wear using statistical signal analysis, pattern recognition, and sensor fusion. The data are acquired from two sources: an acoustic emission sensor (AE) and a tool post dynamometer. The pattern recognition used here is based on two methods: Artificial Neural Networks (ANN) and Polynomial Classifiers (PC). Cutting tool wear values predicted by neural network (ANN) and polynomial classifiers (PC) are compared. For the case study presented, PC proved to significantly reduce the required training time compared to that required by an ANN without compromising the prediction accuracy. The predicted results compared well with the measured tool wear values.

  20. An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation.

    PubMed

    He, Changyu; Kazanzides, Peter; Sen, Hasan Tutkun; Kim, Sungmin; Liu, Yue

    2015-07-08

    Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions.

  1. An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation

    PubMed Central

    He, Changyu; Kazanzides, Peter; Sen, Hasan Tutkun; Kim, Sungmin; Liu, Yue

    2015-01-01

    Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions. PMID:26184191

  2. Toward optimizing stream fusion in multistream recognition of speech.

    PubMed

    Mesgarani, Nima; Thomas, Samuel; Hermansky, Hynek

    2011-07-01

    A multistream phoneme recognition framework is proposed based on forming streams from different spectrotemporal modulations of speech. Phoneme posterior probabilities were estimated from each stream separately and combined at the output level. A statistical model of the final estimated posterior probabilities is used to characterize the system performance. During the operation, the best fusion architecture is chosen automatically to maximize the similarity of output statistics to clean condition. Results on phoneme recognition from noisy speech indicate the effectiveness of the proposed method. © 2011 Acoustical Society of America

  3. Optimization of electrochemical aptamer-based sensors via optimization of probe packing density and surface chemistry.

    PubMed

    White, Ryan J; Phares, Noelle; Lubin, Arica A; Xiao, Yi; Plaxco, Kevin W

    2008-09-16

    Electrochemical, aptamer-based (E-AB) sensors, which are comprised of an electrode modified with surface immobilized, redox-tagged DNA aptamers, have emerged as a promising new biosensor platform. In order to further improve this technology we have systematically studied the effects of probe (aptamer) packing density, the AC frequency used to interrogate the sensor, and the nature of the self-assembled monolayer (SAM) used to passivate the electrode on the performance of representative E-AB sensors directed against the small molecule cocaine and the protein thrombin. We find that, by controlling the concentration of aptamer employed during sensor fabrication, we can control the density of probe DNA molecules on the electrode surface over an order of magnitude range. Over this range, the gain of the cocaine sensor varies from 60% to 200%, with maximum gain observed near the lowest probe densities. In contrast, over a similar range, the signal change of the thrombin sensor varies from 16% to 42% and optimal signaling is observed at intermediate densities. Above cut-offs at low hertz frequencies, neither sensor displays any significant dependence on the frequency of the alternating potential employed in their interrogation. Finally, we find that E-AB signal gain is sensitive to the nature of the alkanethiol SAM employed to passivate the interrogating electrode; while thinner SAMs lead to higher absolute sensor currents, reducing the length of the SAM from 6-carbons to 2-carbons reduces the observed signal gain of our cocaine sensor 10-fold. We demonstrate that fabrication and operational parameters can be varied to achieve optimal sensor performance and that these can serve as a basic outline for future sensor fabrication.

  4. Optimization of Electrochemical Aptamer-Based Sensors via Optimization of Probe Packing Density and Surface Chemistry

    PubMed Central

    White, Ryan J.; Phares, Noelle; Lubin, Arica A.; Xiao, Yi; Plaxco, Kevin W.

    2009-01-01

    Electrochemical, aptamer-based (E-AB) sensors, which are comprised of an electrode modified with surface immobilized, redox-tagged DNA aptamers, have emerged as a promising new biosensor platform. In order to further improve this technology we have systematically studied the effects of probe (aptamer) packing density, the AC frequency used to interrogate the sensor, and the nature of the self-assembled monolayer (SAM) used to passivate the electrode on the performance of representative E-AB sensors directed against the small molecule cocaine and the protein thrombin. We find that, by controlling the concentration of aptamer employed during sensor fabrication, we can control the density of probe DNA molecules on the electrode surface over an order of magnitude range. Over this range, the gain of the cocaine sensor varies from 60% to 200%, with maximum gain observed near the lowest probe densities. In contrast, over a similar range, the signal change of the thrombin sensor varies from 16% to 42% and optimal signaling is observed at intermediate densities. Above cut-offs at low hertz frequencies, neither sensor displays any significant dependence on the frequency of the alternating potential employed in their interrogation. Finally, we find that E-AB signal gain is sensitive to the nature of the alkanethiol SAM employed to passivate the interrogating electrode; while thinner SAMs lead to higher absolute sensor currents, reducing the length of the SAM from 6-carbons to 2-carbons reduces the observed signal gain of our cocaine sensor 10-fold. We demonstrate that fabrication and operational parameters can be varied to achieve optimal sensor performance and that these can serve as a basic outline for future sensor fabrication. PMID:18690727

  5. A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.

    PubMed

    Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani

    2012-01-01

    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, an extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
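
    A hypothetical version of the multi-criteria route quality function mentioned above is sketched below; the specific weights and normalizations are assumptions made for illustration, not the SensorAnt formulas.

```python
def route_quality(residual_energies, network_avg_energy, w=(0.5, 0.3, 0.2)):
    """Illustrative quality score for a candidate route, combining the minimum
    residual battery power, the hop count and the route's average energy
    relative to the network average (weights are placeholders).

    residual_energies  : battery levels (0..1) of the nodes along the route
    network_avg_energy : average battery level (0..1) across the whole network
    """
    min_e = min(residual_energies)                 # weakest node on the route
    hop_penalty = 1.0 / len(residual_energies)     # fewer hops -> larger term
    route_avg = sum(residual_energies) / len(residual_energies)
    return (w[0] * min_e
            + w[1] * hop_penalty
            + w[2] * route_avg / max(network_avg_energy, 1e-9))

# Two candidate routes to the sink: a short one through a nearly empty node,
# and a longer one through healthier nodes (the latter scores higher).
print(route_quality([0.9, 0.1], network_avg_energy=0.6))
print(route_quality([0.8, 0.7, 0.75], network_avg_energy=0.6))
```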

  6. Sensor fusion: lane marking detection and autonomous intelligent cruise control system

    NASA Astrophysics Data System (ADS)

    Baret, Marc; Baillarin, S.; Calesse, C.; Martin, Lionel

    1995-12-01

    In the past few years MATRA and RENAULT have developed an Autonomous Intelligent Cruise Control (AICC) system based on a LIDAR sensor. This sensor, incorporating a charge coupled device, was designed to acquire pulsed laser diode emission reflected by standard car reflectors. The absence of moving mechanical parts, the large field of view, the high measurement rate and the very good accuracy for distance range and angular position of targets make this sensor very interesting. It provides the equipped car with the distance and the relative speed of other vehicles, enabling the safety distance to be controlled by acting on the throttle and the automatic gear box. Experiments in various real traffic situations have shown the limitations of this kind of system, especially on bends. All AICC sensors are unable to distinguish between a bend and a change of lane. This is easily understood if we consider a road without lane markings. This fact has led MATRA to improve its AICC system by providing the lane marking information. Also in the scope of the EUREKA PROMETHEUS project, MATRA and RENAULT have developed a lane keeping system in order to warn of the driver's lack of vigilance. Thus, MATRA has extended this system to far-field lane marking detection and has coupled it with the AICC system. Experiments will be carried out on roads to estimate the gain in performance and comfort due to this fusion.

  7. A Method Based on Multi-Sensor Data Fusion for Fault Detection of Planetary Gearboxes

    PubMed Central

    Lei, Yaguo; Lin, Jing; He, Zhengjia; Kong, Detong

    2012-01-01

    Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults. PMID:22438750

  8. A method based on multi-sensor data fusion for fault detection of planetary gearboxes.

    PubMed

    Lei, Yaguo; Lin, Jing; He, Zhengjia; Kong, Detong

    2012-01-01

    Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults.

  9. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method and the exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
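
    The exhaustive (brute force) search over sensor subsets can be sketched directly from a sensitivity matrix; the ranking criterion used below, the smallest singular value of a noise-weighted sub-matrix (which rewards both sensitivity and noise resistance), is an illustrative assumption rather than the paper's exact metric, and the data are synthetic placeholders.

```python
import numpy as np
from itertools import combinations

def brute_force_sensor_selection(S, k, noise_std=None):
    """Exhaustively search for the k sensors (rows of the sensitivity matrix S)
    whose sub-matrix is best conditioned for estimating the health parameters.

    S         : (n_sensors, n_parameters) sensitivity matrix
    k         : number of sensors to keep (k >= n_parameters is advisable)
    noise_std : optional per-sensor noise levels used to down-weight noisy rows
    """
    n = S.shape[0]
    W = S if noise_std is None else S / np.asarray(noise_std)[:, None]
    best_set, best_score = None, -np.inf
    for subset in combinations(range(n), k):
        # smallest singular value of the selected, noise-weighted rows
        score = np.linalg.svd(W[list(subset)], compute_uv=False)[-1]
        if score > best_score:
            best_set, best_score = subset, score
    return best_set, best_score

rng = np.random.default_rng(2)
S = rng.normal(size=(8, 3))            # 8 candidate sensors, 3 health parameters
print(brute_force_sensor_selection(S, k=4, noise_std=np.full(8, 0.1)))
```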

  10. Particle swarm optimization for optimal sensor placement in ultrasonic SHM systems

    NASA Astrophysics Data System (ADS)

    Blanloeuil, Philippe; Nurhazli, Nur A. E.; Veidt, Martin

    2016-04-01

    A Particle Swarm Optimization (PSO) algorithm is used to improve sensor placement in an ultrasonic Structural Health Monitoring (SHM) system in which detection is performed through a beam-forming imaging algorithm. The imaging algorithm reconstructs the defect image and estimates its location based on analytically generated signals, considering circular through-hole damage in an aluminum plate as the tested structure. Then, the PSO algorithm changes the positions of the sensors to improve the accuracy of the detection. Thus, the two algorithms work together iteratively to optimize the system configuration, taking into account a complete model of the SHM system. It is shown that this approach can provide good sensor placements for the detection of multiple defects in the target area, and for different numbers of sensors.
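
    A minimal PSO loop of the kind used for such placement problems is sketched below. The objective function here is only a geometric placeholder standing in for the localization error returned by the beam-forming imaging algorithm, and the PSO coefficients are conventional default values, not those of the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing `objective` over box bounds.
    Each position vector encodes the concatenated x/y sensor coordinates."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Placeholder objective: place four sensors on a ring around a nominal defect.
target = np.array([0.3, 0.4])
def layout_cost(pos):
    pts = pos.reshape(-1, 2)
    return np.sum((np.linalg.norm(pts - target, axis=1) - 0.25) ** 2)

best, cost = pso(layout_cost, bounds=(np.zeros(8), np.ones(8)))
print(best.reshape(-1, 2), cost)
```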

  11. Optimizing High-Z Coatings for Inertial Fusion Energy Shells

    SciTech Connect

    Stephens, Elizabeth H.; Nikroo, Abbas; Goodin, Daniel T.; Petzoldt, Ronald W.

    2003-05-15

    Inertial fusion energy (IFE) reactors require shells with a high-Z coating that is both permeable, for timely filling with deuterium-tritium, and reflective, for survival in the chamber. Previously, gold was deposited on shells while they were agitated to obtain uniform, reproducible coatings. However, these coatings were rather impermeable, resulting in unacceptably long fill times. We report here on an initial study of Pd coatings deposited on shells in the same manner. We have found that these palladium-coated shells are substantially more permeable than gold-coated ones. Pd coatings on shells remained stable on exposure to deuterium. Pd coatings had lower reflectivity than gold, which leads to a lower working temperature, and efficiency, of the proposed fusion reactor. Seeking to combine the permeability of Pd coatings and the high reflectivity of gold, AuPd-alloy coatings were produced using a cosputtering technique. These alloys demonstrated higher permeability than Au and higher reflectivity than Pd. However, these coatings were still less reflective than the gold coatings. To improve the permeability of the gold coatings, permeation experiments were performed at higher temperatures. With the parameters of composition, thickness, and temperature, we have the ability to comply with a large target design window.

  12. Fusion

    NASA Astrophysics Data System (ADS)

    Herman, Robin

    1990-10-01

    The book abounds with fascinating anecdotes about fusion's rocky path: the spurious claim by Argentine dictator Juan Peron in 1951 that his country had built a working fusion reactor, the rush by the United States to drop secrecy and publicize its fusion work as a propaganda offensive after the Russian success with Sputnik; the fortune Penthouse magazine publisher Bob Guccione sank into an unconventional fusion device, the skepticism that met an assertion by two University of Utah chemists in 1989 that they had created "cold fusion" in a bottle. Aimed at a general audience, the book describes the scientific basis of controlled fusion--the fusing of atomic nuclei, under conditions hotter than the sun, to release energy. Using personal recollections of scientists involved, it traces the history of this little-known international race that began during the Cold War in secret laboratories in the United States, Great Britain and the Soviet Union, and evolved into an astonishingly open collaboration between East and West.

  13. A decision support system for fusion of hard and soft sensor information based on probabilistic latent semantic analysis technique

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Elangovan, Vinayak; Alkilani, Amjad; Habibi, Mohammad

    2013-05-01

    This paper presents an ongoing effort towards the development of an intelligent Decision-Support System (iDSS) for fusion of information from multiple sources consisting of data from hard (physical sensor) and soft (textual) sources. Primarily, this paper defines a taxonomy of decision support systems for latent semantic data mining from heterogeneous data sources. A Probabilistic Latent Semantic Analysis (PLSA) approach is proposed for searching latent semantic concepts across heterogeneous data sources. An architectural model for generating semantic annotation of multi-modality sensors in a modified Transducer Markup Language (TML) is described. A method for TML message fusion is discussed for alignment and integration of spatiotemporally correlated and associated physical sensory observations. Lastly, experimental results which exploit the fusion of soft/hard sensor sources with the support of iDSS are discussed.

  14. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
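
    The fusion stage mentioned above (weighted sums of co-registered frames) reduces to a pixel-wise operation; the sketch below assumes registration has already been applied and uses made-up weights.

```python
import numpy as np

def weighted_sum_fusion(images, weights):
    """Pixel-wise weighted-sum fusion of co-registered sensor images
    (registration, e.g. by an affine transform, is assumed already done)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return np.tensordot(w, stack, axes=1)   # weighted average over the image stack

# Example: fuse two 4x4 infrared frames, favouring the first camera.
a, b = np.full((4, 4), 100.0), np.full((4, 4), 200.0)
print(weighted_sum_fusion([a, b], weights=[0.7, 0.3])[0, 0])  # 130.0
```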

  15. Trust Model of Wireless Sensor Networks and Its Application in Data Fusion

    PubMed Central

    Chen, Zhenguo; Tian, Liqin; Lin, Chuang

    2017-01-01

    In order to ensure the reliability and credibility of the data in wireless sensor networks (WSNs), this paper proposes a trust evaluation model and data fusion mechanism based on trust. First of all, it gives the model structure. Then, the calculation rules of trust are given. In the trust evaluation model, comprehensive trust consists of three parts: behavior trust, data trust, and historical trust. Data trust can be calculated by processing the sensor data. Based on the behavior of nodes in sensing and forwarding, the behavior trust is obtained. The initial value of historical trust is set to the maximum and updated with comprehensive trust. Comprehensive trust can be obtained by weighted calculation, and then the model is used to construct the trust list and guide the process of data fusion. Using the trust model, simulation results indicate that energy consumption can be reduced by an average of 15%. The detection rate of abnormal nodes is at least 10% higher than that of the lightweight and dependable trust system (LDTS) model. Therefore, this model has good performance in ensuring the reliability and credibility of the data. Moreover, the energy consumption of transmitting was greatly reduced. PMID:28350347
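
    The weighted combination of behavior, data and historical trust can be sketched as below; the weights and the smoothing rule for updating historical trust are placeholder assumptions, since the paper defines its own calculation rules.

```python
def comprehensive_trust(behavior, data, historical, weights=(0.4, 0.4, 0.2)):
    """Weighted combination of behavior, data and historical trust for one node
    (the particular weights are illustrative placeholders)."""
    wb, wd, wh = weights
    return wb * behavior + wd * data + wh * historical

def update_historical(historical, comprehensive, alpha=0.5):
    """Historical trust starts at its maximum and is updated with the latest
    comprehensive trust; exponential smoothing is used here as an assumption."""
    return (1 - alpha) * historical + alpha * comprehensive

hist = 1.0                                        # initial historical trust = maximum
for behavior, data in [(0.9, 0.8), (0.4, 0.3)]:   # second round: suspicious readings
    comp = comprehensive_trust(behavior, data, hist)
    hist = update_historical(hist, comp)
    print(round(comp, 3), round(hist, 3))         # trust drops after the bad round
```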

  16. Enhanced damage localization for complex structures through statistical modeling and sensor fusion

    NASA Astrophysics Data System (ADS)

    Haynes, Colin; Todd, Michael

    2015-03-01

    Ultrasonic guided waves represent a promising technique for detecting and localizing structural damage, but their application to realistic structures has been hampered by the complicated interference patterns produced by scattering from geometric features. This work presents a new damage localization paradigm based on a statistical approach to dealing with uncertainty in the guided wave signals. A bolted frame and a section of a fuselage rib are tested with different simulated damage conditions and used to conduct a detailed comparison between the proposed solution and other sparse-array localization approaches. After establishing the superiority of the statistical approach, two novel innovations to the localization procedure are proposed: an approach to sensor fusion based on the Neyman-Pearson criterion, and a method of constructing simple models of geometrical features. Including the sensor fusion and geometrical models produces a substantial improvement in the system's localization accuracy. The final result is a robust and accurate framework for single-site damage localization that moves structural health monitoring towards practical implementation on a much broader range of structures.

  17. Trust Model of Wireless Sensor Networks and Its Application in Data Fusion.

    PubMed

    Chen, Zhenguo; Tian, Liqin; Lin, Chuang

    2017-03-28

    In order to ensure the reliability and credibility of the data in wireless sensor networks (WSNs), this paper proposes a trust evaluation model and data fusion mechanism based on trust. First of all, it gives the model structure. Then, the calculation rules of trust are given. In the trust evaluation model, comprehensive trust consists of three parts: behavior trust, data trust, and historical trust. Data trust can be calculated by processing the sensor data. Based on the behavior of nodes in sensing and forwarding, the behavior trust is obtained. The initial value of historical trust is set to the maximum and updated with comprehensive trust. Comprehensive trust can be obtained by weighted calculation, and then the model is used to construct the trust list and guide the process of data fusion. Using the trust model, simulation results indicate that energy consumption can be reduced by an average of 15%. The detection rate of abnormal nodes is at least 10% higher than that of the lightweight and dependable trust system (LDTS) model. Therefore, this model has good performance in ensuring the reliability and credibility of the data. Moreover, the energy consumption of transmitting was greatly reduced.

  18. Gyro Drift Correction for An Indirect Kalman Filter Based Sensor Fusion Driver.

    PubMed

    Lee, Chan-Gun; Dao, Nhu-Ngoc; Jang, Seonmin; Kim, Deokhwan; Kim, Yonghun; Cho, Sungrae

    2016-06-11

    Sensor fusion techniques have made a significant contribution to the success of the recently emerging mobile applications era because a variety of mobile applications operate based on multi-sensing information from the surrounding environment, such as navigation systems, fitness trackers, interactive virtual reality games, etc. For these applications, the accuracy of sensing information plays an important role to improve the user experience (UX) quality, especially with gyroscopes and accelerometers. Therefore, in this paper, we proposed a novel mechanism to resolve the gyro drift problem, which negatively affects the accuracy of orientation computations in the indirect Kalman filter based sensor fusion. Our mechanism focuses on addressing the issues of external feedback loops and non-gyro error elements contained in the state vectors of an indirect Kalman filter. Moreover, the mechanism is implemented in the device-driver layer, providing lower process latency and transparency capabilities for the upper applications. These advances are relevant to millions of legacy applications since utilizing our mechanism does not require the existing applications to be re-programmed. The experimental results show that the root mean square errors (RMSE) before and after applying our mechanism are significantly reduced from 6.3 × 10⁻¹ to 5.3 × 10⁻⁷, respectively.

  19. Gyro Drift Correction for An Indirect Kalman Filter Based Sensor Fusion Driver

    PubMed Central

    Lee, Chan-Gun; Dao, Nhu-Ngoc; Jang, Seonmin; Kim, Deokhwan; Kim, Yonghun; Cho, Sungrae

    2016-01-01

    Sensor fusion techniques have made a significant contribution to the success of the recently emerging mobile applications era because a variety of mobile applications operate based on multi-sensing information from the surrounding environment, such as navigation systems, fitness trackers, interactive virtual reality games, etc. For these applications, the accuracy of sensing information plays an important role to improve the user experience (UX) quality, especially with gyroscopes and accelerometers. Therefore, in this paper, we proposed a novel mechanism to resolve the gyro drift problem, which negatively affects the accuracy of orientation computations in the indirect Kalman filter based sensor fusion. Our mechanism focuses on addressing the issues of external feedback loops and non-gyro error elements contained in the state vectors of an indirect Kalman filter. Moreover, the mechanism is implemented in the device-driver layer, providing lower process latency and transparency capabilities for the upper applications. These advances are relevant to millions of legacy applications since utilizing our mechanism does not require the existing applications to be re-programmed. The experimental results show that the root mean square errors (RMSE) before and after applying our mechanism are significantly reduced from 6.3 × 10⁻¹ to 5.3 × 10⁻⁷, respectively. PMID:27294941

  20. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and improve network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm was proposed. Building on the LEACH protocol, cluster formation and cluster-head selection were improved using the chicken swarm optimization algorithm, and the positions of individuals trapped in local optima were updated by Levy flight, enhancing population diversity and ensuring the global search capability of the algorithm. The new protocol balances the use of the network nodes, avoiding the premature death of intensively used nodes and improving the survival time of the wireless sensor network. Simulation experiments showed that the protocol outperforms the LEACH protocol in terms of energy consumption, and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.

  1. Optimization in nanocoated D-shaped optical fiber sensors.

    PubMed

    Del Villar, Ignacio; Zubiate, Pablo; Zamarreño, Carlos R; Arregui, Francisco J; Matias, Ignacio R

    2017-05-15

    Nanocoated D-shaped optical fibers have been proven as effective sensors. Here, we show that the full width at half minimum (FWHM) of lossy mode resonance can be reduced by optimizing the nanocoating width, thickness and refractive index. As a counterpart, several resonances are observed in the optical spectrum for specific conditions. These resonances are caused by multiple modes guided in the nanocoating. By optimizing the width of the coating and the imaginary part of its refractive index, it is possible to isolate one of these resonances, which allows one to reduce the full width at half minimum of the device and, hence, to increase the figure of merit. Moreover, it is even possible to avoid the need of a polarizer by designing a device where the resonance bands for TE and TM polarization are centered at the same wavelength. This is interesting for the development of optical filters and sensors with a high figure of merit.

  2. Optimization of parasitic isolators in laser fusion systems

    SciTech Connect

    Figueira, J.F.; Phipps, C.R. Jr.

    1980-01-01

    The results of model calculations for the optimization of the efficiency of high-gain amplifier systems stabilized by saturable absorbers are described. It is shown that the isolator performance can be characterized by a convenient figure of merit.

  3. Optimal sensor placement for parameter estimation of bridges

    NASA Astrophysics Data System (ADS)

    Eskew, Edward; Jang, Shinae

    2017-04-01

    Gathering measurements from a structure can be extremely valuable for tasks such as verifying a numerical model, or structural health monitoring (SHM) to identify changes in the natural frequencies and mode shapes which can be attributed to changes in the system. In most monitoring applications, the number of potential degrees-of-freedom (DOF) for monitoring greatly outnumbers the available sensors. Optimal sensor placement (OSP) is a field of research into different methods for locating the available sensors to gather the optimal measurements. Three common methods of OSP are the effective independence (EI), effective independence driving point residue (EI-DPR), and modal kinetic energy (MKE) methods. However, comparisons of the different OSP methods for SHM applications are limited. In this paper, a comparison of the performance of the three described OSP methods for parameter estimation is performed. Parameter estimation is implemented using modified parameter localization with direct model updating, and added mass quantification utilizing a genetic algorithm (GA). The quantification of the mass addition, using simulated measurements from the sensor networks developed by each OSP method, is compared to provide an evaluation of each OSP method's capability for parameter estimation applications.
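
    Of the three placement methods compared above, the effective independence (EI) method has a particularly compact form: starting from the mode shapes at all candidate DOFs, it repeatedly discards the DOF that contributes least to the linear independence of the target modes until the desired sensor count is reached. A small sketch is given below, with a random mode-shape matrix as a stand-in for a real model.

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Effective Independence (EI) sensor placement: iteratively remove the
    candidate DOF with the smallest effective-independence value until only
    n_sensors DOFs remain.

    Phi : (n_dof, n_modes) mode shape matrix at the candidate DOFs.
    Returns the indices (into the original DOF list) of the kept DOFs.
    """
    kept = list(range(Phi.shape[0]))
    P = Phi.copy()
    while len(kept) > n_sensors:
        # Ed = diagonal of the projection matrix P (P^T P)^-1 P^T
        A = P @ np.linalg.inv(P.T @ P)
        Ed = np.einsum('ij,ij->i', A, P)
        worst = int(np.argmin(Ed))       # DOF contributing least to independence
        kept.pop(worst)
        P = np.delete(P, worst, axis=0)
    return kept

rng = np.random.default_rng(3)
Phi = rng.normal(size=(20, 4))           # 20 candidate DOFs, 4 target modes
print(effective_independence(Phi, n_sensors=6))
```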

  4. Coordinated Optimization of Aircraft Routes and Locations of Ground Sensors

    DTIC Science & Technology

    2014-09-17

    optimization; and, furthermore, the objectives and constraints are specified by either a mission planner or calculated using separate software. ... the calculations are not trivial. Coverage matrix for aircraft: coverage matrix A enters the inequality coverage constraints; assuming the sensors are independent, the probabilities P_md(r_q; s_1, ..., s_Mq) and P_d(r_q; s_1, ..., s_Mq) can be evaluated for any q. To calculate the joint probabilities, sensor information can

  5. Analysis of LPFG sensor systems for aircraft wing drag optimization

    NASA Astrophysics Data System (ADS)

    Kazemi, Alex A.; Ishihara, Abe

    2014-09-01

    In normal fiber, the refractive indices of the core and cladding do not change along the length of the fiber; however, by inducing a periodic modulation of the refractive index along the core, an optical fiber grating is produced, which exhibits very interesting spectral properties. For this reason we propose to develop and integrate a distributed sensor network based on long period fiber gratings (LPFGs), which have grating periods on the order of 100 μm to 1 mm, embedded in the wing section of an aircraft to measure bending and torsion in real time. Measuring the wing deformation of commercial airplanes in this way offers extensive benefits such as reduced structural weight, mitigation of induced drag and lower fuel consumption, which accounts for roughly fifty percent of an airline's total operating cost. Fiber optic sensor measurement capabilities are as vital as those of other sensing technologies, but optical measurements differ in important ways. In this paper we focus on the testing and aviation requirements for LPFG sensors. We discuss the bases of aviation standards for fiber optic sensor measurements, and the quantities that are measured. Our main objective is to optimize the design for material, mechanical, optical and environmental requirements. We discuss the analysis and evaluation of extensive testing of LPFG sensor systems, covering attenuation, environmental, humidity, fluid immersion, temperature cycling, aging, smoke, flammability, impact resistance, flexure endurance, tensile, vibration and shock tests.

  6. Optimal sensor placement using FRFs-based clustering method

    NASA Astrophysics Data System (ADS)

    Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe

    2016-12-01

    The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. Using principal component analysis, the frequency response matrix can be decomposed into principal directions and their corresponding singular values. A relatively small number of principal directions will retain a system's dominant response information. According to the dynamic similarity of each degree of freedom, the k-means clustering algorithm is used to classify the degrees of freedom, and the effective independence method deletes redundant sensors within each cluster. Finally, two numerical examples and a modal test are included to demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sensor sets and that the selected sensors are well distributed over the whole structure.
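
    The short sketch below illustrates, on synthetic data, the clustering idea summarized above: the FRF matrix is reduced by SVD, the degrees of freedom are grouped by k-means according to their dynamic similarity, and one representative DOF per cluster stands in for the paper's EI-based deletion step; matrix sizes, the energy threshold and the sensor count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frf = rng.standard_normal((60, 200))         # 60 DOFs x 200 frequency lines (placeholder)

# principal directions of the response: keep enough components to cover most energy
u, s, vt = np.linalg.svd(frf, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1
features = u[:, :k] * s[:k]                  # each DOF described in the reduced basis

# group dynamically similar DOFs and pick the DOF closest to each cluster centre
n_sensors = 8
km = KMeans(n_clusters=n_sensors, n_init=10, random_state=0).fit(features)
sensors = [int(np.argmin(np.linalg.norm(features - c, axis=1)
                         + 1e9 * (km.labels_ != i)))
           for i, c in enumerate(km.cluster_centers_)]
print(sorted(sensors))
```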

  7. Registration and fusion of multi-sensor data using multiple agents

    NASA Astrophysics Data System (ADS)

    Tait, Roger J.; Hopgood, Adrian A.; Schaefer, Gerald

    2005-06-01

    Non-destructive evaluation is widely used in the manufacturing industry for the detection and characterisation of defects. Typical techniques include visual, magnetic particle, fluorescent dye penetrant, ultrasonic, and eddy current inspection. This paper presents a multi-agent approach to combining image data such as these for quality control. The use of distributed agents allows the speed benefits of parallel processing to be realised, facilitating increased levels of detection through the use of high resolution images. The integration of multi-sensor devices and the fusion of their multi-modal outputs has the potential to provide an increased level of certainty in defect detection and identification. It may also allow the detection and identification of defects that cannot be detected by an individual sensor. This would reduce uncertainty and provide a more complete picture of aesthetic and structural integrity than is possible from a single data source. A blackboard architecture, DARBS (Distributed Algorithmic and Rule-based Blackboard System), has been used to manage the processing and interpretation of image data. Rules and image processing routines are allocated to intelligent agents that communicate with each other via the blackboard, where the current understanding of the problem evolves. Specialist agents register image segments into a common coordinate system. An intensity-based algorithm eliminates the landmark extraction that would be required by feature-based registration techniques. Once registered, pixel-level data fusion is utilised so that both complementary and redundant data can be exploited. The modular nature of the blackboard architecture allows additional sensor data to be processed by the addition or removal of specialised agents.

  8. Temporal Pattern Recognition: A Network Architecture For Multi-Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Priebe, C. E.; Marchette, D. J.

    1989-03-01

    A self-organizing network architecture for the learning and recognition of temporal patterns is proposed. This multi-layered architecture has as its focal point a layer of multi-dimensional Gaussian classification nodes, and the learning scheme employed is based on standard statistical moving mean and moving covariance calculations. The nodes are implemented in the network architecture by using a Gaussian, rather than sigmoidal, transfer function acting on the input from numerous connections. Each connection is analogous to a separate dimension for the Gaussian function. The learning scheme is a one-pass method, eliminating the need for repetitive presentation of the teaching stimuli. The Gaussian classes developed are representative of the statistics of the teaching data and act as templates in classifying novel inputs. The input layer employs a time-based decay to develop a time-ordered representation of the input stimuli. This temporal pattern recognition architecture is used to perform multi-sensor fusion and scene analysis for ROBART II, an autonomous sentry robot employing heterogeneous and homogeneous binary (on / off) sensors. The system receives sensor packets from ROBART indicating which sensors are active. The packets from various sensors are integrated in the input layer. As time progresses these sensor outputs become ordered, allowing the system to recognize activities which are dependent, not only on the individual events which make up the activity, but also on the order in which these events occur and their relative spacing throughout time. Each Gaussian classification node, representing a learned activity as an ordered sequence of sensor outputs, calculates its activation value independently, based on the activity in the input layer. These Gaussian activation values are then used to determine which, if any, of the learned sequences are present and with what confidence. The classification system is capable of recognizing activities despite missing
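
    A small illustrative sketch, not the authors' implementation, of the two components described above: a leaky input layer that time-orders binary sensor events, and a Gaussian classification node trained with a one-pass moving mean and covariance. All parameter values are assumptions.

```python
import numpy as np

class DecayingInputLayer:
    def __init__(self, n_sensors, decay=0.8):
        self.state = np.zeros(n_sensors)
        self.decay = decay
    def step(self, active):                 # `active`: indices of firing sensors
        self.state *= self.decay            # older events fade, preserving their order
        self.state[list(active)] = 1.0
        return self.state.copy()

class GaussianNode:
    def __init__(self, dim, var_floor=1e-2):
        self.n, self.mean = 0, np.zeros(dim)
        self.cov, self.var_floor = np.eye(dim), var_floor
    def learn(self, x):                     # one-pass moving mean / moving covariance
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.cov += (np.outer(d, x - self.mean) - self.cov) / self.n
    def activation(self, x):                # Gaussian, rather than sigmoidal, transfer
        cov = self.cov + self.var_floor * np.eye(len(x))
        d = x - self.mean
        return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)))

layer = DecayingInputLayer(n_sensors=8)
node = GaussianNode(dim=8)
for events in ([1], [1, 3], [5], [5, 7]):   # a toy learned activity (ordered sensor firings)
    node.learn(layer.step(events))
print(node.activation(layer.state))
```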

  9. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    PubMed Central

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-01-01

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate. PMID:25375758

  10. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation.

    PubMed

    Yu, Ningbo; Xu, Chang; Li, Huanshuai; Wang, Kui; Wang, Liancheng; Liu, Jingtai

    2016-03-18

    Disabilities after neural injury, such as stroke, bring a tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, suits ease of use at home or in centralized institutions and, thus, holds effective potential for rehabilitation training.

  11. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    NASA Astrophysics Data System (ADS)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this accuracy is still competitive and acceptable for practical use. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.

  12. Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots.

    PubMed

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-10-21

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information: it is triggered when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during a long walk test in both indoor and outdoor environments.
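
    The sketch below is a minimal one-dimensional illustration of the event-based update idea described above: the IMU drives the prediction at every step, and the global sensor is consulted only when the error covariance exceeds a threshold. The model, noise values and threshold are invented and are not the paper's parameters.

```python
import numpy as np

x, P = 0.0, 1.0                  # position estimate and its variance
Q, R = 0.05, 0.2                 # IMU process noise, global-sensor noise (assumed)
P_LIMIT = 1.0                    # event threshold on the error covariance

def predict(x, P, u_imu, dt):
    return x + u_imu * dt, P + Q * dt

def update(x, P, z_global):
    K = P / (P + R)
    return x + K * (z_global - x), (1.0 - K) * P

rng = np.random.default_rng(0)
truth = 0.0
for k in range(100):
    truth += 0.1
    x, P = predict(x, P, u_imu=0.1 + rng.normal(0, 0.05), dt=1.0)
    if P > P_LIMIT:              # event: request the (expensive) global measurement
        x, P = update(x, P, z_global=truth + rng.normal(0, np.sqrt(R)))
print(round(x, 2), round(P, 3))
```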

  13. Analysis of monitoring data from cable-stayed bridge using sensor fusion techniques

    NASA Astrophysics Data System (ADS)

    Zonta, Daniele; Bruschetta, Federico; Zandonini, Riccardo; Pozzi, Matteo; Wang, Ming; Zhao, Yang; Inaudi, Daniele; Posenato, Daniele; Glisic, Branko

    2013-04-01

    This paper illustrates an application of Bayesian logic to monitoring data analysis and structural condition state inference. The case study is a 260 m long cable-stayed bridge spanning the Adige River 10 km north of the town of Trento, Italy. This is a statically indeterminate structure, having a composite steel-concrete deck, supported by 12 stay cables. Structural redundancy, possible relaxation losses and an as-built condition differing from design suggest that long-term load redistribution between cables can be expected. To monitor load redistribution, the owner decided to install a monitoring system which combines built-on-site elasto-magnetic (EM) and fiber-optic sensors (FOS). In this note, we discuss a rational way to improve the accuracy of the load estimate from the EM sensors by taking advantage of the FOS information. More specifically, we use a multi-sensor Bayesian data fusion approach which combines the information from the two sensing systems with the prior knowledge, including design information and the outcomes of laboratory calibration. Using the data acquired to date, we demonstrate that combining the two measurements allows a more accurate estimate of the cable load, to better than 50 kN.
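
    As a minimal illustration of the Gaussian special case of such Bayesian fusion, the sketch below combines a design-based prior with an elasto-magnetic reading and a fiber-optic-derived estimate by inverse-variance weighting; all load and uncertainty values are invented and unrelated to the bridge in the paper.

```python
import numpy as np

def fuse(means, sigmas):
    """Posterior mean and standard deviation for independent Gaussian sources."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = float(np.sum(w * np.asarray(means, dtype=float)) / np.sum(w))
    return mean, float(1.0 / np.sqrt(np.sum(w)))

# prior (design load), EM sensor reading, FOS-derived load, all in kN (illustrative)
mean, sigma = fuse(means=[2000.0, 2110.0, 2060.0], sigmas=[150.0, 80.0, 60.0])
print(f"posterior load ~ {mean:.0f} kN +/- {sigma:.0f} kN")
```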

  14. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots

    PubMed Central

    Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro

    2013-01-01

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to the traditional methods. The event is defined to reflect the necessity of the global information: it is triggered when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT, and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. Robustness and stability are observed during a long walk test in both indoor and outdoor environments. PMID:24152933

  15. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation

    PubMed Central

    Yu, Ningbo; Xu, Chang; Li, Huanshuai; Wang, Kui; Wang, Liancheng; Liu, Jingtai

    2016-01-01

    Disabilities after neural injury, such as stroke, bring a tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, suits ease of use at home or in centralized institutions and, thus, holds effective potential for rehabilitation training. PMID:26999149

  16. The Optimized Block-Regression Fusion Algorithm for Pansharpening of Very High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, J. X.; Yang, J. H.; Reinartz, P.

    2016-06-01

    Pan-sharpening of very high resolution remotely sensed imagery needs to enhance spatial detail while preserving spectral characteristics, and to adjust the sharpened result to balance the emphasis between these two abilities. To meet these requirements, this paper provides an innovative solution. The block-regression-based algorithm (BR), previously presented for fusion of SAR and optical imagery, is first applied to sharpen very high resolution satellite imagery, and the key parameter for adjusting the fusion result, the block size, is optimized in two experiments on WorldView-2 and QuickBird datasets, in which the optimal block size is selected through quantitative comparison of the fusion results for different block sizes. Quantitative comparison with five fusion algorithms (PC, CN, AWT, Ehlers and BDF) shows that BR is reliable for different data sources and can maximize the enhancement of spatial detail at the expense of minimal spectral distortion.

  17. Optimizing probe design for an implantable perfusion and oxygenation sensor

    SciTech Connect

    Akl, Tony; Long, Ruiqi; McShane, Michael J.; Ericson, Milton Nance; Wilson, Mark A.; Cote, Gerard L.

    2011-01-01

    In an effort to develop an implantable optical perfusion and oxygenation sensor, based on multiwavelength reflectance pulse oximetry, we investigate the effect of source-detector separation and other source-detector characteristics to optimize the sensor's signal-to-background ratio using Monte Carlo (MC) based simulations and in vitro phantom studies. Separations in the range 0.45 to 1.25 mm were found to be optimal in the case of a point source. The numerical aperture (NA) of the source had no effect on the collected signal, while widening of the source spatial profile caused a shift in the optimal source-detector separation. Specifically, for a 4.5 mm flat beam and a 2.4 mm × 2.5 mm photodetector, optimal performance was found when the source and detector are adjacent to each other. These modeling results were confirmed by data collected from in vitro experiments on a liver phantom perfused with dye solutions mimicking the absorption properties of hemoglobin for different oxygenation states.

  18. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    NASA Astrophysics Data System (ADS)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.
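
    The sketch below is a generic two-state Kalman filter (angle and gyro bias) for the gyroscope/accelerometer fusion problem described above; it is not one of the filter variants compared in the paper, and the noise values and synthetic signals are assumptions.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle propagated by (gyro - bias) * dt
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])               # accelerometer observes the angle only
Q = np.diag([1e-5, 1e-7])                # process noise on angle and bias
R = np.array([[0.04]])                   # accelerometer-derived angle noise

x, P = np.zeros(2), np.eye(2)            # state: [angle, gyro bias]
true_angle, rng = 0.0, np.random.default_rng(0)
for k in range(2000):
    true_rate = 0.5 * np.sin(0.01 * k)
    true_angle += true_rate * dt
    gyro = true_rate + 0.02 + rng.normal(0, 0.01)   # biased, noisy rate sensor
    acc_angle = true_angle + rng.normal(0, 0.2)     # noisy tilt from gravity decomposition
    # predict with the rate sensor, then correct with the accelerometer angle
    x = F @ x + B * gyro
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (acc_angle - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"angle error {x[0] - true_angle:+.4f} rad, estimated gyro bias {x[1]:.4f}")
```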

  19. Quantitative characterization of pulverized coal and biomass-coal blends in pneumatic conveying pipelines using electrostatic sensor arrays and data fusion techniques

    NASA Astrophysics Data System (ADS)

    Qian, Xiangchen; Yan, Yong; Shao, Jiaqing; Wang, Lijuan; Zhou, Hao; Wang, Chao

    2012-08-01

    Quantitative data about the dynamic behaviour of pulverized coal and biomass-coal blends in fuel injection pipelines allow power plant operators to detect variations in fuel supply and oscillations in the flow at an early stage, enable them to balance fuel distribution between fuel feeding pipes and ultimately to achieve higher combustion efficiency and lower greenhouse gas emissions. Electrostatic sensor arrays and data fusion algorithms are combined to provide a non-intrusive solution to the measurement of fuel particle velocity, relative solid concentration and flow stability under pneumatic conveying conditions. Electrostatic sensor arrays with circular and arc-shaped electrodes are integrated in the same sensing head to measure ‘averaged’ and ‘localized’ characteristics of pulverized fuel flow. Data fusion techniques are applied to optimize and integrate the results from the sensor arrays. Experimental tests were conducted on the horizontal section of a 150 mm bore pneumatic conveyor circulating pulverized coal and sawdust under various flow conditions. Test results suggest that pure coal particles travel faster and carry more electrostatic charge than biomass-coal blends. As more biomass particles are added to the flow, the overall velocity of the flow reduces, the electrostatic charge level on particles decreases and the flow becomes less stable compared to the pure coal flow.

  20. Classification of paddy rice through multi-temporal multi-sensor data fusion

    NASA Astrophysics Data System (ADS)

    Im, Jungho; Park, Seonyoung

    2017-04-01

    Rice is one of the most important food resources in the world and its consumption continues to increase with increasing world population. Accurate paddy rice mapping and monitoring are crucial for food security and agricultural mitigation because they enable us to forecast rice production. There have been studies for paddy rice classification using optical sensor data. However, optical sensor data acquisition is limited by cloud contamination. Active Synthetic Aperture Radar (SAR) data have been used to compensate for the cloud problems of optical sensor images. Integration of the multispectral and SAR data can produce more reliable crop classification results than data from a single sensor. In addition, as paddy rice has distinct phenology, many studies used phenology features from multi-temporal data for detecting paddy rice. Thus, this study aims at mapping paddy rice by expanding the spectral and temporal dimensions of data. In this study, we conducted paddy rice classification through fusion of multi-temporal optical sensor (Landsat) and SAR (RADARSAT-1 and ALOS PALSAR) data using two machine learning approaches—random forest (RF) and support vector machines (SVM) over two study sites (Dangjin-si in South Korea and Sutter County, California in the United States). This study examined six scenarios to identify the effect of the expansion of data dimension. Each scenario has a different combination of data sources and seasonal characteristics. We examined variable importance to identify which sensor data collected at which season are important to classify paddy rice. In addition, this study proposed a new index called Paddy rice Mapping Index (PMI) for effective paddy rice classification considering the spectral and temporal characteristics of paddy rice. Scenario 6, which uses multi-temporal optical sensor and SAR data, showed the highest overall accuracy (site 1: 98.67%; site 2: 93.87%) for paddy rice classification among six scenarios. Both machine

  1. Exploiting node mobility for energy optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    El-Moukaddem, Fatme Mohammad

    Wireless Sensor Networks (WSNs) have become increasingly available for data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit the sheer amount of data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies such as batteries or small solar panels. The availability of numerous low-cost robotic units (e.g. Robomote and Khepera) has made it possible to construct sensor networks consisting of mobile sensor nodes. It has been shown that the controlled mobility offered by mobile sensors can be exploited to improve the energy efficiency of a network. In this thesis, we propose schemes that use mobile sensor nodes to reduce the energy consumption of data-intensive WSNs. Our approaches differ from previous work in two main aspects. First, our approaches do not require complex motion planning of mobile nodes, and hence can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless communications into a holistic optimization framework. We consider three problems arising from the limited energy in the sensor nodes. In the first problem, the network consists of mostly static nodes and contains only a few mobile nodes. In the second and third problems, we assume essentially that all nodes in the WSN are mobile. We first study a new problem called max-data mobile relay configuration (MMRC ) that finds the positions of a set of mobile sensors, referred to as relays, that maximize the total amount of data gathered by the network during its lifetime. We show that the MMRC problem is surprisingly complex even for a trivial network topology due to the joint consideration of the energy consumption of both wireless communication and mechanical locomotion. We present optimal MMRC algorithms and practical distributed

  2. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    NASA Astrophysics Data System (ADS)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

    A calibration platform for geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the extrinsic geometric parameters of the cameras is applied using a planar calibration pattern, and specific software was developed for the calibration procedure. Patterns used in geometric calibration are prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges, using chessboards whose fields are made of materials with different emissivities. Experiments were carried out in both indoor and outdoor scenarios. Important results of geometric calibration for the multi-sensor image fusion system are the extrinsic parameters in the form of homography matrices used for homography transformation of the object plane to the image plane. For each camera a corresponding homography matrix is calculated. These matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for selected patterns: chessboards with fields made of materials with different emissivities. For the final image registration algorithm in a surveillance system for object tracking, we chose a multi-resolution image registration algorithm which combines naturally with a pyramidal fusion scheme. The image pyramids generated at each time step of the registration algorithm may be reused at the fusion stage, so that the overall number of calculations is greatly reduced.
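
    The sketch below illustrates the registration step described above, assuming OpenCV is available and that matching chessboard corner coordinates have already been extracted from the two modalities; the point coordinates and image sizes are invented.

```python
import numpy as np
import cv2

# corresponding corner positions (pixels) found in the two modalities (illustrative)
pts_thermal  = np.array([[10, 12], [310, 15], [305, 230], [12, 228]], dtype=np.float32)
pts_lowlight = np.array([[25, 30], [620, 36], [612, 470], [28, 465]], dtype=np.float32)

# homography mapping the thermal image plane onto the low-light image plane
H, _ = cv2.findHomography(pts_thermal, pts_lowlight)

thermal = np.zeros((256, 320), dtype=np.uint8)        # placeholder thermal frame
registered = cv2.warpPerspective(thermal, H, (640, 480))
print(H.round(3))
```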

  3. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.

  4. Discrete-Time ARMAv Model-Based Optimal Sensor Placement

    SciTech Connect

    Song Wei; Dyke, Shirley J.

    2008-07-08

    This paper concentrates on the optimal sensor placement problem in ambient vibration based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using an auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate form of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure of the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that would lead to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results under various measures.

  5. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements

    PubMed Central

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-01-01

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject’s movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation. PMID:27854288

  6. Model-based optimal design of polymer-coated chemical sensors.

    PubMed

    Phillips, Cynthia; Jakusch, Michael; Steiner, Hannes; Mizaikoff, Boris; Fedorov, Andrei G

    2003-03-01

    A model-based methodology for optimal design of polymer-coated chemical sensors is developed and is illustrated for the example of infrared evanescent field chemical sensors. The methodology is based on rigorous and computationally efficient modeling of combined fluid mechanics and mass transfer, including transport of multiple analytes. A simple algebraic equation for the optimal size of the sensor flow cell is developed to guide sensor design and validated by extensive CFD simulations. Based upon these calculations, optimized geometries of the sensor flow cell are proposed to further improve the response time of chemical sensors.

  7. A data fusion algorithm for multi-sensor microburst hazard assessment

    NASA Technical Reports Server (NTRS)

    Wanke, Craig R.; Hansman, R. John

    1994-01-01

    A recursive model-based data fusion algorithm for multi-sensor microburst hazard assessment is described. An analytical microburst model is used to approximate the actual windfield, and a set of 'best' model parameters are estimated from measured winds. The winds corresponding to the best parameter set can then be used to compute alerting factors such as microburst position, extent, and intensity. The estimation algorithm is based on an iterated extended Kalman filter which uses the microburst model parameters as state variables. Microburst state dynamic and process noise parameters are chosen based on measured microburst statistics. The estimation method is applied to data from a time-varying computational simulation of a historical microburst event to demonstrate its capabilities and limitations. Selection of filter parameters and initial conditions is discussed. Computational requirements and datalink bandwidth considerations are also addressed.

  8. Context-Aware Personal Navigation Using Embedded Sensor Fusion in Smartphones

    PubMed Central

    Saeedi, Sara; Moussa, Adel; El-Sheimy, Naser

    2014-01-01

    Context-awareness is an interesting topic in mobile navigation scenarios where the context of the application is highly dynamic. Using context-aware computing, navigation services consider the situation of user, not only in the design process, but in real time while the device is in use. The basic idea is that mobile navigation services can provide different services based on different contexts—where contexts are related to the user's activity and the device placement. Context-aware systems are concerned with the following challenges which are addressed in this paper: context acquisition, context understanding, and context-aware application adaptation. The proposed approach in this paper is using low-cost sensors in a multi-level fusion scheme to improve the accuracy and robustness of context-aware navigation system. The experimental results demonstrate the capabilities of the context-aware Personal Navigation Systems (PNS) for outdoor personal navigation using a smartphone. PMID:24670715

  9. Robustness properties of LQG optimized compensators for collocated rate sensors

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1994-01-01

    In this paper we study the robustness with respect to stability of the closed-loop system with collocated rate sensor using LQG (mean square rate) optimized compensators. Our main result is that the transmission zeros of the compensator are precisely the structure modes when the actuator/sensor locations are 'pinned' and/or 'clamped': i.e., motion in the direction sensed is not allowed. We have stability even under parameter mismatch, except in the unlikely situation where such a mode frequency of the assumed system coincides with an undamped mode frequency of the real system and the corresponding mode shape is an eigenvector of the compensator transfer function matrix at that frequency. For a truncated modal model - such as that of the NASA LaRC Phase Zero Evolutionary model - the transmission zeros of the corresponding compensator transfer function can be interpreted as the structure modes when motion in the directions sensed is prohibited.

  10. Optimal feedback control of a bioreactor with a remote sensor

    NASA Technical Reports Server (NTRS)

    Niranjan, S. C.; San, K. Y.

    1988-01-01

    Sensors used to monitor bioreactor conditions directly often perform poorly in the face of adverse nonphysiological conditions. One way to circumvent this is to use a remote sensor block. However, such a configuration usually causes a significant time lag between measurements and the actual state values. Here, the problem of implementing feedback control strategies for such systems, described by nonlinear equations, is addressed. The problem is posed as an optimal control problem with a linear quadratic performance index. The linear control law so obtained is used to implement feedback. A global linearization technique as well as an expansion using Taylor series is used to linearize the nonlinear system, and the feedback is subsequently implemented.

  12. Real-time classification and sensor fusion with a spiking deep belief network

    PubMed Central

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input. PMID:24115919

  13. ADS-B and multilateration sensor fusion algorithm for air traffic control

    NASA Astrophysics Data System (ADS)

    Liang, Mengchen

    Air traffic is expected to increase rapidly in the next decade, but the current Air Traffic Control (ATC) system does not meet future demands for safety and efficiency. The Next Generation Air Transportation System (NextGen) is a transformation program for the ATC system in the United States. The latest estimates by the Federal Aviation Administration (FAA) show that by 2018 NextGen will reduce total delays in flight by 35 percent and provide 23 billion dollars in cumulative benefits. A satellite-based technology called the Automatic Dependent Surveillance-Broadcast (ADS-B) system is one of the most important elements in NextGen. The FAA expects that ADS-B systems will be available in the National Airspace System (NAS) by 2020. However, an alternative surveillance system is needed due to vulnerabilities that exist in ADS-B systems. Multilateration offers high accuracy and is believed to be an ideal back-up strategy for ADS-B systems. Thus, in this study, we develop an ADS-B and multilateration sensor fusion algorithm for aircraft tracking applications in ATC. The algorithm contains a fault detection function for ADS-B information monitoring, using Trajectory Change Point reports from ADS-B and numerical vectors from a hybrid estimation algorithm. We consider two types of faults in the ADS-B measurement model to show that the algorithm is able to deal with bad data from ADS-B systems and automatically select good data from multilateration systems. We apply fuzzy logic concepts and generate time-variant parameters during the fusion process. The parameters act as weights for combining data from different sensors. The algorithm performance is validated through two aircraft tracking examples.

  14. Real-time classification and sensor fusion with a spiking deep belief network.

    PubMed

    O'Connor, Peter; Neil, Daniel; Liu, Shih-Chii; Delbruck, Tobi; Pfeiffer, Michael

    2013-01-01

    Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.

  15. Multiple sensor detection of process phenomena in laser powder bed fusion

    NASA Astrophysics Data System (ADS)

    Lane, Brandon; Whitenton, Eric; Moylan, Shawn

    2016-05-01

    Laser powder bed fusion (LPBF) is an additive manufacturing (AM) process in which a high power laser melts metal powder layers into complex, three-dimensional shapes. LPBF parts are known to exhibit relatively high residual stresses, anisotropic microstructure, and a variety of defects. To mitigate these issues, in-situ measurements of the melt-pool phenomena may illustrate relationships between part quality and process signatures. However, phenomena such as spatter, plume formation, laser modulation, and melt-pool oscillations may require data acquisition rates exceeding 10 kHz. This hinders use of relatively data-intensive, streaming imaging sensors in a real-time monitoring and feedback control system. Single-point sensors such as photodiodes provide the temporal bandwidth to capture process signatures, while providing little spatial information. This paper presents results from experiments conducted on a commercial LPBF machine which incorporated synchronized, in-situ acquisition of a thermal camera, high-speed visible camera, photodiode, and laser modulation signal during fabrication of a nickel alloy 625 AM part with an overhang geometry. Data from the thermal camera provides temperature information, the visible camera provides observation of spatter, and the photodiode signal provides high temporal bandwidth relative brightness stemming from the melt pool region. In addition, joint-time frequency analysis (JTFA) was performed on the photodiode signal. JTFA results indicate what digital filtering and signal processing are required to highlight particular signatures. Image fusion of the synchronized data obtained over multiple build layers allows visual comparison between the photodiode signal and relating phenomena observed in the imaging detectors.
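
    A minimal sketch of a joint time-frequency analysis step is shown below; the sampling rate and the synthetic photodiode trace (a square-wave laser modulation plus a slower melt-pool-like oscillation and noise) are assumptions standing in for real data.

```python
import numpy as np
from scipy import signal

fs = 100_000                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
photodiode = (0.5 * signal.square(2 * np.pi * 10_000 * t)      # laser modulation
              + 0.2 * np.sin(2 * np.pi * 900 * t)              # slow oscillation
              + 0.05 * np.random.default_rng(0).standard_normal(t.size))

# joint time-frequency analysis: short-time power spectra of the photodiode signal
f, tt, Sxx = signal.spectrogram(photodiode, fs=fs, nperseg=1024, noverlap=768)
dominant = f[Sxx.argmax(axis=0)]               # strongest frequency in each window
print(dominant[:5])
```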

  16. Spatial Aspects of Multi-Sensor Data Fusion: Aerosol Optical Thickness

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Zubko, V.; Gopalan, A.

    2007-01-01

    The Goddard Earth Sciences Data and Information Services Center (GES DISC) investigated the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the multitude of NASA remote sensing data sets, and as part of a larger effort to integrate this capability in the GES-DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni). This initial study focused on merging daily mean Aerosol Optical Thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood technique to merge the pixel values where available. The algorithm was applied to two regional AOT subsets (with mostly regular and irregular gaps, respectively) and a set of AOT fields that differed only in the size and location of artificially created gaps. The Cumulative Semivariogram (CSV) was found to be sensitive to the spatial distribution of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
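
    The sketch below is a simplified, per-pixel illustration of such maximum-likelihood merging, assuming each instrument's retrieval error is Gaussian with a known variance; the variances and the tiny example arrays are invented, and NaNs mark the gaps.

```python
import numpy as np

def merge_aot(terra, aqua, var_terra=0.02**2, var_aqua=0.02**2):
    w_t, w_a = 1.0 / var_terra, 1.0 / var_aqua
    fused = (w_t * terra + w_a * aqua) / (w_t + w_a)   # ML estimate where both exist
    fused = np.where(np.isnan(fused), terra, fused)    # only Terra available
    fused = np.where(np.isnan(fused), aqua, fused)     # only Aqua available
    return fused                                       # still NaN where both are missing

terra = np.array([[0.21, np.nan], [0.35, np.nan]])
aqua  = np.array([[0.25, 0.18],  [np.nan, np.nan]])
print(merge_aot(terra, aqua))
```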

  17. Fusion and optimized Gabor filter design for object detection

    NASA Astrophysics Data System (ADS)

    Weber, David; Casasent, David P.

    1995-10-01

    We consider the problem of detection of objects in images using one filter based on different 2D Gabor functions. By detection, we mean locating multiple classes of targets with distortions present and in a clutter background. It is also desirable to minimize false alarms due to clutter. Gabor functions (GFs) are Gaussian functions modulated by complex sinusoids. The imaginary (real) part of a GF has been shown to be a good edge (blob) detector. In this work, we use a single filter which is a linear combination of the real and imaginary parts of several GFs. We refer to this as a macro Gabor filter. It is correlated with an input image and then thresholded to detect targets. The new aspects are: combining real and imaginary parts of GFs into one filter, separately optimizing the parameters of the GFs by controlling the shape of the correlation outputs for true classes and clutter and separately optimizing the linear combination coefficients using a new square law perceptron to detect hot and cold objects. We show multi-class distortion invariant detection results with better performance than obtained with other methods.
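
    As a rough illustration of the macro Gabor filter idea, the sketch below builds two complex 2D Gabor functions, forms a single filter from a weighted sum of their real and imaginary parts, and correlates it with a random image; the scales, orientations, weights and threshold are invented, and the paper's optimization of these parameters is not performed.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, sigma, freq, theta):
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))     # Gaussian envelope
    return env * np.exp(1j * 2 * np.pi * freq * xr)   # complex sinusoid carrier

# macro filter: weighted sum of real (blob) and imaginary (edge) parts of several GFs
parts = [gabor(31, s, f, th) for s, f, th in [(4, 0.10, 0.0), (6, 0.05, np.pi / 4)]]
weights = [0.7, -0.3, 0.4, 0.2]                       # illustrative coefficients
macro = sum(w * p for w, p in zip(weights, [parts[0].real, parts[0].imag,
                                            parts[1].real, parts[1].imag]))

image = np.random.default_rng(0).random((128, 128))
response = fftconvolve(image, macro[::-1, ::-1], mode="same")   # correlation
detections = response > response.mean() + 3 * response.std()   # simple threshold
print(int(detections.sum()))
```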

  18. Optimization of compact stellarator configuration as fusion devices

    SciTech Connect

    Najmabadi, Farrokh; Rene Raffray, A.; Ku, Long-Poe; Lyon, James F.

    2006-05-15

    Optimization of the stellarator configuration requires tradeoffs among a large number of physics parameters and engineering constraints. An integrated study of compact stellarator power plants, ARIES-CS, aims at examining these tradeoffs and defining key R and D areas. Configurations with a plasma aspect ratio of A ≤ 6 and excellent quasiaxisymmetry (QA) in both two and three field period versions were developed while reducing α-particle losses to <10%. Stability to linear ideal MHD modes was attained, but at the expense of reduced QA (and increased α-particle losses) and increased complexity of the plasma shape. Recent experimental results indicate, however, that linear MHD stability limits may not be applicable to stellarators. By utilizing a highly efficient shield-only region in strategic areas, the minimum standoff was reduced by approximately 30%. This allows a comparable reduction in the machine size. The device configuration, assembly, and maintenance procedures appear to impose severe constraints: three distinct approaches were developed, each applicable to a certain blanket concept and/or stellarator configuration. Modular coils are designed to examine the geometric complexity and to understand the constraints imposed by the maximum allowable field, desirable coil-plasma separation, coil-coil spacing, and other coil parameters. A cost-optimization system code has also been developed and will be utilized to assess the tradeoff among physics and engineering constraints in a self-consistent manner in the final phase of the ARIES-CS study.

  19. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, on base stations, and also on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches.
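
    The sketch below applies a plain (not hybrid) particle swarm optimization to a toy version of the placement problem above: choosing the 2-D position of a single storage node that minimizes a data-rate-weighted transmission cost between producers and a base station; the node layout, rates and cost model are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
producers = rng.uniform(0, 100, size=(10, 2))   # producer (sensor) node positions
rates = rng.uniform(1, 5, size=10)              # data rates of the producers
sink = np.array([50.0, 110.0])                  # consumer / base station position

def cost(p):                                    # energy ~ rate * squared distance
    return (np.sum(rates * np.sum((producers - p) ** 2, axis=1))
            + rates.sum() * np.sum((sink - p) ** 2))

n, w, c1, c2 = 30, 0.7, 1.5, 1.5                # swarm size and PSO coefficients
pos = rng.uniform(0, 110, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print(gbest.round(2), round(cost(gbest), 1))
```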

  20. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, on base stations, and also on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. The works carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized to solve the clustering problem using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches. PMID:25734182

  1. Passive in-vehicle driver breath alcohol detection using advanced sensor signal acquisition and fusion.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Allalou, Amin; Pettersson, Håkan

    2017-05-29

    The research objective of the present investigation is to demonstrate the present status of passive in-vehicle driver breath alcohol detection and highlight the necessary conditions for large-scale implementation of such a system. Completely passive detection has remained a challenge mainly because of the requirements on signal resolution combined with the constraints of vehicle integration. The work is part of the Driver Alcohol Detection System for Safety (DADSS) program aiming at massive deployment of alcohol sensing systems that could potentially save thousands of American lives annually. The work reported here builds on earlier investigations, in which it has been shown that detection of alcohol vapor in the proximity of a human subject may be traced to that subject by means of simultaneous recording of carbon dioxide (CO2) at the same location. Sensors based on infrared spectroscopy were developed to detect and quantify low concentrations of alcohol and CO2. In the present investigation, alcohol and CO2 were recorded at various locations in a vehicle cabin while human subjects were performing normal in-step procedures and driving preparations. A video camera directed at the driver position recorded images of the driver's upper body parts, including the face, and the images were analyzed with respect to features of significance to the breathing behavior and breath detection, such as mouth opening and head direction. Improvements of the sensor system with respect to signal resolution, including algorithm and software development, and fusion of the sensor and camera signals were successfully implemented and tested before starting the human study. In addition, experimental tests and simulations were performed with the purpose of connecting human subject data with repeatable experimental conditions. The results include occurrence statistics of detected breaths by signal peaks of CO2 and alcohol. From the statistical data, the accuracy of breath alcohol

  2. The application of machine learning in multi sensor data fusion for activity recognition in mobile device space

    NASA Astrophysics Data System (ADS)

    Marhoubi, Asmaa H.; Saravi, Sara; Edirisinghe, Eran A.

    2015-05-01

    The present generation of mobile handheld devices comes equipped with a large number of sensors. The key sensors include the Ambient Light Sensor, Proximity Sensor, Gyroscope, Compass and the Accelerometer. Many mobile applications are driven by the readings obtained from one or two of these sensors. However, the presence of multiple sensors enables the determination of more detailed activities carried out by the user of a mobile device, and thus the development of smarter mobile applications that respond more appropriately to user behavior and device usage. In the proposed research we use recent advances in machine learning to fuse together the data obtained from all key sensors of a mobile device. We investigate the possible use of single and ensemble classifier based approaches to identify a mobile device's behavior in the space in which it is present. Feature selection algorithms are used to remove non-discriminant features that often lead to poor classifier performance. As the sensor readings are noisy and include a significant proportion of missing values and outliers, we use machine learning based approaches to clean the raw data obtained from the sensors before use. Based on selected practical case studies, we demonstrate the ability to accurately recognize device behavior based on multi-sensor data fusion.

  3. Improving aquaporin Z expression in Escherichia coli by fusion partners and subsequent condition optimization.

    PubMed

    Lian, Jiazhang; Ding, Shinghua; Cai, Jin; Zhang, Danping; Xu, Zhinan; Wang, Xiaoning

    2009-03-01

    Aquaporin Z (AqpZ), a typical orthodox aquaporin with six transmembrane domains, was expressed as a fusion protein with TrxA in E. coli in our previous work. In the present study, three fusion partners (DsbA, GST and MBP) were employed to improve the expression level of this channel protein in E. coli. The results showed that, compared with the expression level of TrxA-AqpZ, a five- to 40-fold increase in the productivity of AqpZ fusion proteins was achieved with these different fusion partners, and MBP was the most efficient fusion partner for increasing the expression level. Using E. coli C43 (DE3)/pMAL-AqpZ, the effects of different expression conditions were investigated systematically to improve the expression level of MBP-AqpZ in E. coli. A high productivity of MBP-AqpZ (200 mg/l) was achieved under optimized conditions. The present work provides a novel approach to improving the expression level of membrane proteins in E. coli.

  4. Some aspects of optimal human-computer symbiosis in multisensor geospatial data fusion

    NASA Astrophysics Data System (ADS)

    Levin, E.; Sergeyev, A.

    Nowadays the vast amount of available geospatial data provides additional opportunities for increasing targeting accuracy through geospatial data fusion. One of the most obvious operations is determining targets' 3D shapes and geospatial positions from overlapped 2D imagery and sensor modeling. 3D models allow the extraction of information about targets that cannot be measured directly from single, non-fused imagery. The paper describes an ongoing research effort at Michigan Tech attempting to combine the advantages of human analysts and automated computer processing into an efficient human-computer symbiosis for geospatial data fusion. Specifically, the capabilities provided by integrating novel human-computer interaction methods, such as eye-tracking and EEG, into geospatial targeting interfaces were explored. The paper describes the research performed and its results in more detail.

  5. A Sensor Fusion Method for Tracking Vertical Velocity and Height Based on Inertial and Barometric Altimeter Measurements

    PubMed Central

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-01-01

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04–0.24 m/s; height RMSE was in the range 5–68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions. PMID:25061835
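
    A minimal sketch of the complementary-filter stage described above, assuming the vertical acceleration has already been rotated into the navigation frame and gravity-compensated; the gains k_h and k_v and the synthetic data are placeholders, and the paper's EKF attitude estimation and whitening filter are not reproduced.

```python
import numpy as np

def complementary_height_filter(acc_z, baro_alt, dt, k_h=1.0, k_v=0.5):
    """Fuse gravity-compensated vertical acceleration (m/s^2) with barometric
    altitude (m): integrate the inertial channel for the fast dynamics and
    correct its drift with the slowly varying pressure altitude."""
    h, v = baro_alt[0], 0.0
    heights, velocities = [], []
    for a, z in zip(acc_z, baro_alt):
        # predict by integrating the inertial channel
        v += a * dt
        h += v * dt
        # correct toward the barometric measurement (complementary action)
        err = z - h
        h += k_h * err * dt
        v += k_v * err * dt
        heights.append(h)
        velocities.append(v)
    return np.array(heights), np.array(velocities)

# synthetic example: stationary sensor with noisy measurements
dt = 0.01
t = np.arange(0, 10, dt)
acc = np.random.normal(0.0, 0.05, t.size)     # accelerometer noise
baro = np.random.normal(0.0, 0.30, t.size)    # barometric noise around 0 m
h_est, v_est = complementary_height_filter(acc, baro, dt)
print("height std:", h_est.std(), "velocity std:", v_est.std())
```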

  6. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  7. Flux Tensor Constrained Geodesic Active Contours with Sensor Fusion for Persistent Object Tracking

    PubMed Central

    Bunyak, Filiz; Palaniappan, Kannappan; Nath, Sumit Kumar; Seetharaman, Gunasekaran

    2007-01-01

    This paper makes new contributions in motion detection, object segmentation and trajectory estimation to create a successful object tracking system. A new efficient motion detection algorithm referred to as the flux tensor is used to detect moving objects in infrared video without requiring background modeling or contour extraction. The flux tensor-based motion detector when applied to infrared video is more accurate than thresholding "hot-spots", and is insensitive to shadows as well as illumination changes in the visible channel. In real world monitoring tasks fusing scene information from multiple sensors and sources is a useful core mechanism to deal with complex scenes, lighting conditions and environmental variables. The object segmentation algorithm uses level set-based geodesic active contour evolution that incorporates the fusion of visible color and infrared edge information in a novel manner. Touching or overlapping objects are further refined during the segmentation process using an appropriate shape-based model. Multiple object tracking using correspondence graphs is extended to handle groups of objects and occlusion events by Kalman filter-based cluster trajectory analysis and watershed segmentation. The proposed object tracking algorithm was successfully tested on several difficult outdoor multispectral videos from stationary sensors and is not confounded by shadows or illumination variations. PMID:19096530

  8. Sensor fusion algorithms for the detection of nuclear material at border crossings

    NASA Astrophysics Data System (ADS)

    Nicol, David M.; Tsang, Rose; Ammerlahn, Heidi; Johnson, Michael M.

    2006-05-01

    In this age of global terrorism the ability to detect nuclear material (i.e., radioactive sources) at border crossings is of great importance. Today radiation sensing devices range from handheld detectors to large ultra high-resolution gamma-ray spectrometers. The latter are stationary devices and often used in the form of "portals" placed at border crossings. Currently, border crossings make use of these radiation portals as isolated passive devices. A threshold is set on each detector and any time that threshold is reached an alarm is triggered. Once an alarm is triggered it is assumed that first responders with handheld detectors will disperse in order to determine the location of the radioactive source. This manner of locating a threat is highly dependent upon the reaction of the first responders, thus causing unpredictable delays. We seek to better automate the identification of radioactive sources, by using sensor fusion algorithms which combine the data from multiple sensors (radioactive, RFID, vision) to deduce the location of the source. These algorithms capitalize upon the geometry of a "typical" border crossing layout where parallel lines of vehicles or people are queued so that the location can be computed in near real-time. We find that source location is quickly computable, using as few as one sensitive radioactive detector, augmented by a visual or RFID tracking system.

  9. Fine Particle Sensor Based on Multi-Angle Light Scattering and Data Fusion.

    PubMed

    Shao, Wenjia; Zhang, Hongjian; Zhou, Hongliang

    2017-05-04

    Meteorological parameters such as relative humidity have a significant impact on the precision of PM2.5 measurement instruments based on light scattering. Instead of adding meteorological sensors or dehumidification devices used widely in commercial PM2.5 measurement instruments, a novel particle sensor based on multi-angle light scattering and data fusion is proposed to eliminate the effect of meteorological factors. Three photodiodes are employed to collect the scattered light flux at three distinct angles. Weather index is defined as the ratio of scattered light fluxes collected at the 40° and 55° angles, which can be used to distinguish the mass median diameter variation caused by different meteorological parameters. Simulations based on Lorenz-Mie theory and field experiments establish the feasibility of this scheme. Experimental results indicate that mass median diameter has less effect on the photodiode at the 55° angle in comparison with photodiodes at the 40° angle and 140° angle. After correction using the weather index, the photodiode at the 40° angle yielded the best results followed by photodiodes at the 55° angle and the 140° angle.

  10. Fine Particle Sensor Based on Multi-Angle Light Scattering and Data Fusion

    PubMed Central

    Shao, Wenjia; Zhang, Hongjian; Zhou, Hongliang

    2017-01-01

    Meteorological parameters such as relative humidity have a significant impact on the precision of PM2.5 measurement instruments based on light scattering. Instead of adding meteorological sensors or dehumidification devices used widely in commercial PM2.5 measurement instruments, a novel particle sensor based on multi-angle light scattering and data fusion is proposed to eliminate the effect of meteorological factors. Three photodiodes are employed to collect the scattered light flux at three distinct angles. Weather index is defined as the ratio of scattered light fluxes collected at the 40° and 55° angles, which can be used to distinguish the mass median diameter variation caused by different meteorological parameters. Simulations based on Lorenz-Mie theory and field experiments establish the feasibility of this scheme. Experimental results indicate that mass median diameter has less effect on the photodiode at the 55° angle in comparison with photodiodes at the 40° angle and 140° angle. After correction using the weather index, the photodiode at the 40° angle yielded the best results followed by photodiodes at the 55° angle and the 140° angle. PMID:28471406
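
    A toy numeric illustration of the weather-index idea (the ratio of the scattered fluxes at the 40° and 55° angles) being used to correct the 40° channel; the correction form and the coefficients a, b, c are hypothetical placeholders that would have to come from calibration against a reference instrument, not values from the paper.

```python
def weather_index(flux_40, flux_55):
    """Ratio of scattered light flux at 40 deg to 55 deg, used as a proxy for
    changes in mass median diameter caused by humidity and other weather."""
    return flux_40 / flux_55

def corrected_pm25(flux_40, flux_55, a=1.0, b=-0.5, c=0.0):
    """Toy correction: scale the 40-degree response by a polynomial in the
    weather index.  Coefficients a, b, c are hypothetical and would come from
    calibration against a reference PM2.5 instrument."""
    wi = weather_index(flux_40, flux_55)
    return flux_40 * (a + b * (wi - 1.0)) + c

print(corrected_pm25(flux_40=120.0, flux_55=100.0))
```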

  11. Flux Tensor Constrained Geodesic Active Contours with Sensor Fusion for Persistent Object Tracking.

    PubMed

    Bunyak, Filiz; Palaniappan, Kannappan; Nath, Sumit Kumar; Seetharaman, Gunasekaran

    2007-08-01

    This paper makes new contributions in motion detection, object segmentation and trajectory estimation to create a successful object tracking system. A new efficient motion detection algorithm referred to as the flux tensor is used to detect moving objects in infrared video without requiring background modeling or contour extraction. The flux tensor-based motion detector when applied to infrared video is more accurate than thresholding "hot-spots", and is insensitive to shadows as well as illumination changes in the visible channel. In real world monitoring tasks fusing scene information from multiple sensors and sources is a useful core mechanism to deal with complex scenes, lighting conditions and environmental variables. The object segmentation algorithm uses level set-based geodesic active contour evolution that incorporates the fusion of visible color and infrared edge information in a novel manner. Touching or overlapping objects are further refined during the segmentation process using an appropriate shape-based model. Multiple object tracking using correspondence graphs is extended to handle groups of objects and occlusion events by Kalman filter-based cluster trajectory analysis and watershed segmentation. The proposed object tracking algorithm was successfully tested on several difficult outdoor multispectral videos from stationary sensors and is not confounded by shadows or illumination variations.

  12. Genetic algorithm application in optimization of wireless sensor networks.

    PubMed

    Norouzi, Ali; Zaim, A Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are the network lifetime and the energy consumed by routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a relatively good option thanks to its efficiency for large-scale applications and because the final formula can be modified by operators. The present survey tries to achieve a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, and to achieve an ideal set of parameters for routing and application-based WSN. Using the genetic algorithm and based on the results of simulations in NS, a specific fitness function was obtained, optimized, and customized for all the operational stages of WSNs.

  13. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters are the network lifetime and the energy consumed by routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a relatively good option thanks to its efficiency for large-scale applications and because the final formula can be modified by operators. The present survey tries to achieve a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, and to achieve an ideal set of parameters for routing and application-based WSN. Using the genetic algorithm and based on the results of simulations in NS, a specific fitness function was obtained, optimized, and customized for all the operational stages of WSNs. PMID:24693235
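
    A minimal sketch of how a genetic algorithm can optimize one of the operational stages mentioned above (cluster-head selection) against an energy-style fitness; the node layout, the fitness terms and the GA settings are assumptions for illustration, not the survey's NS-derived fitness function.

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(40, 2))   # hypothetical node coordinates
sink = np.array([50.0, 50.0])

def fitness(mask):
    """Lower is better: member-to-nearest-head distances plus head-to-sink
    distances, with a penalty for having too many cluster heads."""
    heads = nodes[mask]
    if heads.shape[0] == 0:
        return np.inf
    d_members = np.min(np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2), axis=1)
    d_heads = np.linalg.norm(heads - sink, axis=1)
    return d_members.sum() + d_heads.sum() + 5.0 * heads.shape[0]

def genetic_algorithm(pop_size=40, n_gen=100, p_mut=0.02):
    pop = rng.random((pop_size, nodes.shape[0])) < 0.15   # bit = node is a cluster head
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, nodes.shape[0])
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            child ^= rng.random(child.shape) < p_mut         # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmin([fitness(ind) for ind in pop])]
    return best, fitness(best)

best_mask, best_score = genetic_algorithm()
print("cluster heads:", np.flatnonzero(best_mask), "fitness:", best_score)
```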

  14. Ultrasonic Sensor Placement Optimization in Structural Health Monitoring Using Evolutionary Strategy

    NASA Astrophysics Data System (ADS)

    Gao, H.; Rose, J. L.

    2006-03-01

    In structural health monitoring (SHM), sensor network scale and sensor distribution decisions are critical since they greatly affect sensor network performance and system cost. A quantitative sensor placement optimization method with covariance matrix adaptation evolutionary strategy (CMAES) is presented in this paper. A damage detection probability model is developed for ultrasonic guided wave sensor networks. Two sample problems are presented in this paper. One is for a structure with an irregular damage distribution probability, and the other is for an E2 aircraft wing section. The reliability of this genetic and evolutionary optimization method is demonstrated in this study. Sensor network configurations with minimum missed-detection probability are obtained from the results of evolutionary optimization. The tradeoff relationship between optimized sensor network performance and the number of sensors is also presented in this paper.

  15. Pixel-by-pixel VIS/NIR and LIR sensor fusion system

    NASA Astrophysics Data System (ADS)

    Zhang, Evan; Zhang, James S.; Song, Vivian W.; Chin, Ken P.; Hu, Gelbert

    2003-01-01

    A visible (VIS) camera (such as a CCD) or near-infrared (NIR) camera (such as a low-light-level CCD or image intensifier) has high resolution and makes it easy to distinguish friend from foe, but it cannot see through thin fog/cloud, heavy smoke/dust, foliage, camouflage, or darkness. The Long Infrared (LIR) imager can overcome these problems, but its resolution is too low and it cannot see the NIR aiming light from the enemy. The best solution is to fuse the VIS/NIR and LIR sensors to overcome their shortcomings and take advantage of both sensors. In order to see the same target without parallax, the fusion system must have a common optical aperture. In this paper, three common optical apertures are designed: a common reflective objective lens, a common beam splitter, and a common transmissive objective lens. The first has a very small field of view and the second needs two heads, so the best choice is the third, but suitable optical materials must be found and the color aberrations corrected from 0.6 to 12 μ, which is a tough job. By choosing ZnSe as the first common piece of the objective lens and using glass for NIR and Ge (or IR glass) for LIR for the remaining pieces, we only need to, and are able to, correct the aberrations from 0.6 to 1.0 μ for NIR and from 8 to 12 μ for LIR. Finally, a common reflective objective lens and the common beam splitter are also successfully designed. Five application examples are given. In the digital signal processing, we use only one Altera chip. After inserting data, scaling the image size, and adjusting the signal level, the LIR has the same format and pixel count as the VIS/NIR, so real-time pixel-by-pixel sensor fusion is realized. The digital output can be used for further image processing and automatic target recognition; for example, if we overlay the LIR image on the VIS/NIR image for missile guidance or a rifle sight, we do not need to worry about the time of day or the environment. A gum-size wireless transmitter is also designed that is

  16. Optimal sensor placement for modal testing on wind turbines

    NASA Astrophysics Data System (ADS)

    Schulze, Andreas; Zierath, János; Rosenow, Sven-Erik; Bockhahn, Reik; Rachholz, Roman; Woernle, Christoph

    2016-09-01

    The mechanical design of wind turbines requires a profound understanding of the dynamic behaviour. Even though highly detailed simulation models are already in use to support wind turbine design, modal testing on a real prototype is irreplaceable to identify site-specific conditions such as the stiffness of the tower foundation. Correct identification of the mode shapes of a complex mechanical structure much depends on the placement of the sensors. For operational modal analysis of a 3 MW wind turbine with a 120 m rotor on a 100 m tower developed by W2E Wind to Energy, algorithms for optimal placement of acceleration sensors are applied. The mode shapes used for the optimisation are calculated by means of a detailed flexible multibody model of the wind turbine. Among the three algorithms in this study, the genetic algorithm with weighted off-diagonal criterion yields the sensor configuration with the highest quality. The ongoing measurements on the prototype will be the basis for the development of optimised wind turbine designs.

  17. Data dimensionality reduction and data fusion for fast characterization of green coffee samples using hyperspectral sensors.

    PubMed

    Calvini, Rosalba; Foca, Giorgia; Ulrici, Alessandro

    2016-10-01

    Hyperspectral sensors represent a powerful tool for chemical mapping of solid-state samples, since they provide spectral information localized in the image domain in very short times and without the need of sample pretreatment. However, due to the large data size of each hyperspectral image, data dimensionality reduction (DR) is necessary in order to develop hyperspectral sensors for real-time monitoring of large sets of samples with different characteristics. In particular, in this work, we focused on DR methods to convert the three-dimensional data array corresponding to each hyperspectral image into a one-dimensional signal (1D-DR), which retains spectral and/or spatial information. In this way, large datasets of hyperspectral images can be converted into matrices of signals, which in turn can be easily processed using suitable multivariate statistical methods. Obviously, different 1D-DR methods highlight different aspects of the hyperspectral image dataset. Therefore, in order to investigate their advantages and disadvantages, in this work, we compared three different 1D-DR methods: average spectrum (AS), single space hyperspectrogram (SSH) and common space hyperspectrogram (CSH). In particular, we have considered 370 NIR-hyperspectral images of a set of green coffee samples, and the three 1D-DR methods were tested for their effectiveness in sensor fault detection, data structure exploration and sample classification according to coffee variety and to coffee processing method. Principal component analysis and partial least squares-discriminant analysis were used to compare the three separate DR methods. Furthermore, low-level and mid-level data fusion was also employed to test the advantages of using AS, SSH and CSH altogether. Graphical Abstract: Key steps in hyperspectral data dimensionality reduction.
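
    A minimal sketch of the simplest of the three 1D-DR strategies named above, the average spectrum (AS): each hyperspectral cube is collapsed to a single spectrum, and the spectra are stacked into a samples-by-bands matrix ready for PCA or PLS-DA; the array sizes and data here are placeholders, not the coffee dataset.

```python
import numpy as np

def average_spectrum(hypercube):
    """Reduce an (rows, cols, bands) hyperspectral image to one 1D spectrum
    by averaging over the two spatial dimensions."""
    return hypercube.reshape(-1, hypercube.shape[-1]).mean(axis=0)

# hypothetical dataset: a few 100 x 100 pixel images with 150 NIR bands
images = [np.random.rand(100, 100, 150) for _ in range(5)]
X = np.vstack([average_spectrum(img) for img in images])  # samples x bands matrix
print(X.shape)  # this matrix would then be fed to PCA / PLS-DA
```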

  18. A millimeter wave image fusion algorithm design and optimization based on CDF97 wavelet transform

    NASA Astrophysics Data System (ADS)

    Yu, Jian-cheng; Chen, Bo-yang; Xia, A.-lin; Liu, Xin-guang

    2011-08-01

    region. Based on this assumption, a new fusion rule is proposed here. First, the low-frequency part of the selection matrix is obtained by comparing the regional low-frequency energy matrices of the two images. Then, taking into account the consistency and continuity of low-frequency detail, the high-frequency and low-frequency selection matrices can be kept the same, which ensures better fusion of the image edge characteristics. Simulation results show that this algorithm's performance is much better than the traditional energy-weighted and averaging methods; it scores slightly lower on quantitative comparison metrics than the region-based method, but the difference is small according to visual evaluation. The algorithm was tested on a self-developed image processing platform based on the DM642. By using the optimization strategy, the speed of 256 × 256 dual-channel image fusion can exceed 28 frames/s. Therefore, the proposed fusion algorithm can meet the system performance and application requirements.

  19. Embry-Riddle Aeronautical University multispectral sensor and data fusion laboratory: a model for distributed research and education

    NASA Astrophysics Data System (ADS)

    McMullen, Sonya A. H.; Henderson, Troy; Ison, David

    2017-05-01

    The miniaturization of unmanned systems and spacecraft, as well as computing and sensor technologies, has opened new opportunities in the areas of remote sensing and multi-sensor data fusion for a variety of applications. Remote sensing and data fusion historically have been the purview of large government organizations, such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and National Geospatial-Intelligence Agency (NGA), due to the high cost and complexity of developing, fielding, and operating such systems. However, miniaturized computers with high-capacity processing capabilities, small and affordable sensors, and emerging, commercially available platforms such as UAS and CubeSats to carry such sensors have allowed for a vast range of novel applications. In order to leverage these developments, Embry-Riddle Aeronautical University (ERAU) has developed an advanced sensor and data fusion laboratory to research component capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. This lab is unique in several ways; for example, it provides a traditional campus laboratory for students and faculty to model and test sensors in a range of scenarios, process multi-sensor data sets (both simulated and experimental), and analyze results. Moreover, it allows for "virtual" modeling, testing, and teaching capability reaching beyond the physical confines of the facility for use among ERAU Worldwide students and faculty located around the globe. Although other institutions such as Georgia Institute of Technology, Lockheed Martin, University of Dayton, and University of Central Florida have optical sensor laboratories, the ERAU virtual concept is the first such lab to expand to multispectral sensors and data fusion, while focusing on the data collection and data products and not on the manufacturing aspect. Further, the initiative is a unique effort among Embry-Riddle faculty to develop multi

  20. Globally Optimal Multisensor Distributed Random Parameter Matrices Kalman Filtering Fusion with Applications

    PubMed Central

    Luo, Yingting; Zhu, Yunmin; Luo, Dandan; Zhou, Jie; Song, Enbin; Wang, Donghua

    2008-01-01

    This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations including the measurement with a false alarm probability as a special case, as well as, randomly variant dynamic systems with multiple models. Numerical examples are given which support our analysis and show significant performance loss of ignoring the randomness of the parameter matrices. PMID:27873977

  1. Globally Optimal Multisensor Distributed Random Parameter Matrices Kalman Filtering Fusion with Applications.

    PubMed

    Luo, Yingting; Zhu, Yunmin; Luo, Dandan; Zhou, Jie; Song, Enbin; Wang, Donghua

    2008-12-08

    This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations including the measurement with a false alarm probability as a special case, as well as, randomly variant dynamic systems with multiple models. Numerical examples are given which support our analysis and show significant performance loss of ignoring the randomness of the parameter matrices.
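
    A minimal sketch of the classical information-form (track-to-track) fusion of local Kalman estimates that share a common prediction; it illustrates the distributed-equals-centralized flavour of the result but does not model the random parameter matrices that are the subject of the paper. All numbers below are placeholders.

```python
import numpy as np

def fuse_local_estimates(xs, Ps, x_pred, P_pred):
    """Information-form fusion of local estimates x_i (covariances P_i) that
    share a common prediction (x_pred, P_pred).  The common prior information
    is subtracted (N - 1) times so it is not double-counted; local update
    errors are assumed conditionally independent."""
    N = len(xs)
    info = -(N - 1) * np.linalg.inv(P_pred)
    info_vec = info @ x_pred
    for x_i, P_i in zip(xs, Ps):
        P_inv = np.linalg.inv(P_i)
        info += P_inv
        info_vec += P_inv @ x_i
    P_fused = np.linalg.inv(info)
    return P_fused @ info_vec, P_fused

# two local 2-state estimates sharing the same one-step prediction
x_pred, P_pred = np.array([0.0, 1.0]), np.eye(2) * 2.0
x1, P1 = np.array([0.1, 1.1]), np.eye(2) * 0.5
x2, P2 = np.array([0.2, 0.9]), np.eye(2) * 0.8
x_f, P_f = fuse_local_estimates([x1, x2], [P1, P2], x_pred, P_pred)
print("fused state:", x_f, "fused variances:", np.diag(P_f))
```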

  2. Optimization design of the coil of the eddy current sensor

    NASA Astrophysics Data System (ADS)

    Pu, Tiecheng; Fan, Shangchun

    2006-11-01

    An eddy current sensor is usually used to measure the departure of a shaft from its axis, in order to avoid destroying the system through collision. The design of the coil, the sensing element of an eddy current sensor, consists of searching for a set of proper sizes (the outer radius, the inner radius and the height of the coil) for which the quality factor and the gradient of the magnetic field strength are as large as possible while the lead wire is not too long. So an optimization function is introduced here for efficient design. This function is directly proportional to the quality factor of the core and to the magnetic field gradient produced by the coil, and inversely proportional to the lead length. The weights of the three terms can be changed according to the circumstances. When the value of the function reaches its maximum, the coil sizes are the anticipated optimal sizes and the overall capability of the coil is at its highest. To search for the maximum of the function, a genetic algorithm is adopted. The simulation results in Matlab prove the practicability of the method.

  3. Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines

    NASA Astrophysics Data System (ADS)

    Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian

    2016-11-01

    Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
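
    A minimal sketch of the sparsity-promoting regression step, assuming scikit-learn's Lasso: candidate PIV grid points act as regression features, turbine output is the target, and the indices with non-zero coefficients are read off as suggested sparse sensor (ADV) locations; the data below are synthetic stand-ins, not the flume measurements.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_snapshots, n_points = 200, 500                 # PIV snapshots x candidate grid points
X = rng.normal(size=(n_snapshots, n_points))     # stand-in velocity fields
true_locs = [17, 230, 411]                       # points that actually drive turbine output
y = X[:, true_locs] @ np.array([1.0, -0.6, 0.8]) + 0.05 * rng.normal(size=n_snapshots)

# The L1 penalty drives most coefficients to zero; the surviving indices are
# the suggested locations for the small number of real-time point sensors.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("selected sensor locations:", selected)
```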

  4. The effect of prediction error correlation on optimal sensor placement in structural dynamics

    NASA Astrophysics Data System (ADS)

    Papadimitriou, Costas; Lombaert, Geert

    2012-04-01

    The problem of estimating the optimal sensor locations for parameter estimation in structural dynamics is re-visited. The effect of spatially correlated prediction errors on the optimal sensor placement is investigated. The information entropy is used as a performance measure of the sensor configuration. The optimal sensor location is formulated as an optimization problem involving discrete-valued variables, which is solved using computationally efficient sequential sensor placement algorithms. Asymptotic estimates for the information entropy are used to develop useful properties that provide insight into the dependence of the information entropy on the number and location of sensors. A theoretical analysis shows that the spatial correlation length of the prediction errors controls the minimum distance between the sensors and should be taken into account when designing optimal sensor locations with potential sensor distances up to the order of the characteristic length of the dynamic problem considered. Implementation issues for modal identification and structural-related model parameter estimation are addressed. Theoretical and computational developments are illustrated by designing the optimal sensor configurations for a continuous beam model, a discrete chain-like stiffness-mass model and a finite element model of a footbridge in Wetteren (Belgium). Results point out the crucial effect the spatial correlation of the prediction errors have on the design of optimal sensor locations for structural dynamics applications, revealing simultaneously potential inadequacies of spatially uncorrelated prediction errors models.

  5. Vis-NIRS sensor fusion for local, regional and global calibrations of SOC content

    NASA Astrophysics Data System (ADS)

    Knadel, M.; Deng, F.; Thomsen, A.; Greve, M. H.

    2012-04-01

    The considerable potential of sensor fusion has been recognised and is finding application in soil mapping. Visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy (DRS) has proven to be an efficient method for measuring soil physical, chemical and biological properties. The objective of this study was to determine whether a Danish national soil spectral library spiked with local samples can give better SOC predictions than regional or local calibrations alone. It was also investigated whether the national library, developed using a laboratory sensor, can be used to predict field data generated by the mobile sensor platform (MSP). Local and regional SOC models were based on data obtained from individual fields and from the compilation of the data from six fields in one model, respectively. The prerequisite for using vis-NIR for soil analysis is the development of a soil spectral library. Such a library should be based on samples representative of the future application. We selected 2851 samples from a diverse archive of Danish soils and performed a global calibration. Additionally, the national spectral library was spiked with some local samples obtained from the fields under study. Local, regional and global spiked SOC models were compared. The best results from the partial least squares regression (PLSR) were obtained for the regional calibration (RMSEP=0.39, r2=0.93 and RPD=4), followed by the calibrations from the library spiked with local MSP measurements (RMSEP=0.38, r2=0.84, RPD=2.5). Finally, kriging maps of SOC content were validated. The highest root mean square error of prediction (5.4) was generated by the map based on the regional calibration model. The lowest RMSEP (4.1), however, was found for the map generated from the global library spiked with the local samples acquired by MSP. The results from this study show that the national spectral library established using a laboratory sensor can deliver good predictive abilities of SOC on field

  6. Optimization of tritium breeding and shielding analysis to plasma in ITER fusion reactor

    SciTech Connect

    Indah Rosidah, M.; Suud, Zaki; Yazid, Putranto Ilham

    2015-09-30

    The development of fusion energy is one of the important international energy strategies, with an important milestone being the ITER (International Thermonuclear Experimental Reactor) project, initiated by many countries, such as America, Europe, and Japan, who agreed to set up a TOKAMAK-type fusion reactor in France. In an ideal fusion reactor the fuel is purely deuterium, but this requires a higher reactor temperature. In the ITER project the fuels are deuterium and tritium, which need a lower reactor temperature. In this study tritium for the fusion reactor is produced using the reaction of lithium with neutrons in the blanket region, i.e., a tritium breeding blanket in which Li-6 reacts with neutrons coming from the plasma region. In this research the material used in each layer surrounding the plasma is optimized, with the aim of achieving a self-sufficiency condition in which enough tritium is available to be consumed for a long time. In order to optimize the Tritium Breeding Ratio (TBR) value in the fusion reactor, several strategies are considered here. The first is varying the Li-6 enrichment to 60%, 70%, and 90%; however, this does not yield a better TBR value than no enrichment, because the Li-7 fraction is reduced when the Li-6 fraction is increased. The other approach is replacing the neutron multiplier material with Pb; from this, we find that the TBR value is better with Be as the neutron multiplier. Besides the TBR value, the distributions of neutron flux and neutron dose rate are analyzed to determine the change of neutron concentration in each layer of the reactor. From the simulation in this study, 97% of the neutrons can be absorbed by material in the reactor, which is good enough. In addition, the neutron energy spectrum is analyzed in several layers of the fusion reactor, such as the blanket, coolant, and divertor. Actually the material in those layers can resist high temperature

  7. Optimization of tritium breeding and shielding analysis to plasma in ITER fusion reactor

    NASA Astrophysics Data System (ADS)

    Indah Rosidah, M.; Suud, Zaki; Yazid, Putranto Ilham

    2015-09-01

    The development of fusion energy is one of the important international energy strategies, with an important milestone being the ITER (International Thermonuclear Experimental Reactor) project, initiated by many countries, such as America, Europe, and Japan, who agreed to set up a TOKAMAK-type fusion reactor in France. In an ideal fusion reactor the fuel is purely deuterium, but this requires a higher reactor temperature. In the ITER project the fuels are deuterium and tritium, which need a lower reactor temperature. In this study tritium for the fusion reactor is produced using the reaction of lithium with neutrons in the blanket region, i.e., a tritium breeding blanket in which Li-6 reacts with neutrons coming from the plasma region. In this research the material used in each layer surrounding the plasma is optimized, with the aim of achieving a self-sufficiency condition in which enough tritium is available to be consumed for a long time. In order to optimize the Tritium Breeding Ratio (TBR) value in the fusion reactor, several strategies are considered here. The first is varying the Li-6 enrichment to 60%, 70%, and 90%; however, this does not yield a better TBR value than no enrichment, because the Li-7 fraction is reduced when the Li-6 fraction is increased. The other approach is replacing the neutron multiplier material with Pb; from this, we find that the TBR value is better with Be as the neutron multiplier. Besides the TBR value, the distributions of neutron flux and neutron dose rate are analyzed to determine the change of neutron concentration in each layer of the reactor. From the simulation in this study, 97% of the neutrons can be absorbed by material in the reactor, which is good enough. In addition, the neutron energy spectrum is analyzed in several layers of the fusion reactor, such as the blanket, coolant, and divertor. Actually the material in those layers can resist high temperature

  8. Fusion rule estimation in multiple sensor systems with unknown noise distributions

    SciTech Connect

    Rao, N.S.V.

    1993-12-31

    A system of N sensors S_1, S_2, …, S_N is considered; corresponding to an object with parameter x ∈ R^d, sensor S_i yields output y^(i) ∈ R^d according to an unknown probability distribution p_i(y^(i) | x). A training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y_i^(1), y_i^(2), …, y_i^(N)) and y_i^(j) is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f : R^(Nd) → R^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^(1), y^(2), …, y^(N))]^2 p(y^(1), y^(2), …, y^(N) | x) p(x) dy^(1) dy^(2) … dy^(N) dx is minimized over a family of fusion rules Λ based on the given l-sample. Let f_* ∈ Λ minimize I(f); f_* cannot be computed since the underlying probability distributions are unknown. Using Vapnik's empirical risk minimization method, we show that if Λ has finite capacity, then under bounded error, for a sufficiently large sample, f_emp can be obtained such that P[I(f_emp) − I(f_*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1. We identify several computational methods to obtain f_emp or its approximations based on neural networks, radial basis functions, wavelets, non-polynomial networks, and polynomials and splines. We then discuss linearly separable systems to identify objects from a finite class where f_emp can be computed in polynomial time using quadratic programming methods.

  9. Stochastic approximation methods for fusion-rule estimation in multiple sensor systems

    SciTech Connect

    Rao, N.S.V.

    1994-06-01

    A system of N sensors S_1, S_2, …, S_N is considered; corresponding to an object with parameter x ∈ R^d, sensor S_i yields output y^(i) ∈ R^d according to an unknown probability distribution p_i(y^(i) | x). A training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y_i^(1), y_i^(2), …, y_i^(N)) and y_i^(j) is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f : R^(Nd) → R^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^(1), y^(2), …, y^(N))]^2 p(y^(1), y^(2), …, y^(N) | x) p(x) dy^(1) dy^(2) … dy^(N) dx is minimized over a family of fusion rules Λ based on the given l-sample. Let f_* ∈ Λ minimize I(f); f_* cannot be computed since the underlying probability distributions are unknown. Three stochastic approximation methods are presented to compute an estimator f̂ such that, under suitable conditions, for a sufficiently large sample, P[I(f̂) − I(f_*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1. The three methods are based on Robbins-Monro style algorithms, empirical risk minimization, and regression estimation algorithms.
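
    A minimal sketch of the empirical-risk-minimization route to a fusion rule: given a training l-sample of (x, y) pairs, with y the stacked outputs of the N sensors, a rule f from a small hypothesis class (here, polynomial features fitted by least squares) is chosen to minimize the empirical mean squared error; the sensor models and the feature map are assumptions for illustration, not the function classes studied in the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n_samples = 3, 1, 500                 # three scalar sensors, scalar parameter x

x = rng.uniform(-1, 1, size=(n_samples, d))
# hypothetical sensor models: biased / noisy observations of x
y = np.hstack([x + 0.1 * rng.normal(size=(n_samples, d)),
               0.8 * x + 0.2 * rng.normal(size=(n_samples, d)),
               x ** 2 + 0.1 * rng.normal(size=(n_samples, d))])

def features(y):
    """Simple polynomial feature map defining the hypothesis class Lambda."""
    return np.hstack([np.ones((y.shape[0], 1)), y, y ** 2])

# empirical risk minimization: least-squares fit of the fusion rule over Lambda
Phi = features(y)
w, *_ = np.linalg.lstsq(Phi, x, rcond=None)

x_hat = Phi @ w
print("empirical risk (MSE):", float(np.mean((x_hat - x) ** 2)))
```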

  10. Human Arm Motion Tracking by Orientation-Based Fusion of Inertial Sensors and Kinect Using Unscented Kalman Filter.

    PubMed

    Atrsaei, Arash; Salarieh, Hassan; Alasty, Aria

    2016-09-01

    Due to various applications of human motion capture techniques, developing low-cost methods that would be applicable in nonlaboratory environments is under consideration. MEMS inertial sensors and Kinect are two low-cost devices that can be utilized in home-based motion capture systems, e.g., home-based rehabilitation. In this work, an unscented Kalman filter approach was developed based on the complementary properties of Kinect and the inertial sensors to fuse the orientation data of these two devices for human arm motion tracking during both stationary shoulder joint position and human body movement. A new measurement model of the fusion algorithm was obtained that can compensate for the inertial sensors drift problem in high dynamic motions and also joints occlusion in Kinect. The efficiency of the proposed algorithm was evaluated by an optical motion tracker system. The errors were reduced by almost 50% compared to cases when either inertial sensor or Kinect measurements were utilized.

  11. Experimental measurement of oil-water two-phase flow by data fusion of electrical tomography sensors and venturi tube

    NASA Astrophysics Data System (ADS)

    Liu, Yinyan; Deng, Yuchi; Zhang, Maomao; Yu, Peining; Li, Yi

    2017-09-01

    Oil-water two-phase flows are commonly found in the production processes of the petroleum industry. Accurate online measurement of flow rates is crucial to ensure the safety and efficiency of oil exploration and production. A research team from Tsinghua University has developed an experimental apparatus for multiphase flow measurement based on an electrical capacitance tomography (ECT) sensor, an electrical resistance tomography (ERT) sensor, and a venturi tube. This work presents the phase fraction and flow rate measurements of oil-water two-phase flows based on the developed apparatus. Full-range phase fraction can be obtained by the combination of the ECT sensor and the ERT sensor. By data fusion of differential pressures measured by venturi tube and the phase fraction, the total flow rate and single-phase flow rate can be calculated. Dynamic experiments were conducted on the multiphase flow loop in horizontal and vertical pipelines and at various flow rates.

  12. Power optimization in body sensor networks: the case of an autonomous wireless EMG sensor powered by PV-cells.

    PubMed

    Penders, J; Pop, V; Caballero, L; van de Molengraft, J; van Schaijk, R; Vullers, R; Van Hoof, C

    2010-01-01

    Recent advances in ultra-low-power circuits and energy harvesters are making self-powered body sensor nodes a reality. Power optimization at the system and application level is crucial in achieving ultra-low-power consumption for the entire system. This paper reviews system-level power optimization techniques and illustrates their impact on the case of autonomous wireless EMG monitoring. The resulting prototype, an autonomous wireless EMG sensor powered by PV-cells, is presented.

  13. Trunk Motion System (TMS) Using Printed Body Worn Sensor (BWS) via Data Fusion Approach

    PubMed Central

    Mokhlespour Esfahani, Mohammad Iman; Zobeiri, Omid; Moshiri, Behzad; Narimani, Roya; Mehravar, Mohammad; Rashedi, Ehsan; Parnianpour, Mohamad

    2017-01-01

    Human movement analysis is an important part of biomechanics and rehabilitation, for which many measurement systems are introduced. Among these, wearable devices have substantial biomedical applications, primarily since they can be implemented both in indoor and outdoor applications. In this study, a Trunk Motion System (TMS) using printed Body-Worn Sensors (BWS) is designed and developed. TMS can measure three-dimensional (3D) trunk motions, is lightweight, and is a portable and non-invasive system. After the recognition of sensor locations, twelve BWSs were printed on stretchable clothing with the purpose of measuring the 3D trunk movements. To integrate BWSs data, a neural network data fusion algorithm was used. The outcome of this algorithm along with the actual 3D anatomical movements (obtained by Qualisys system) were used to calibrate the TMS. Three healthy participants with different physical characteristics participated in the calibration tests. Seven different tasks (each repeated three times) were performed, involving five planar, and two multiplanar movements. Results showed that the accuracy of TMS system was less than 1.0°, 0.8°, 0.6°, 0.8°, 0.9°, and 1.3° for flexion/extension, left/right lateral bending, left/right axial rotation, and multi-planar motions, respectively. In addition, the accuracy of TMS for the identified movement was less than 2.7°. TMS, developed to monitor and measure the trunk orientations, can have diverse applications in clinical, biomechanical, and ergonomic studies to prevent musculoskeletal injuries, and to determine the impact of interventions. PMID:28075342

  14. Trunk Motion System (TMS) Using Printed Body Worn Sensor (BWS) via Data Fusion Approach.

    PubMed

    Mokhlespour Esfahani, Mohammad Iman; Zobeiri, Omid; Moshiri, Behzad; Narimani, Roya; Mehravar, Mohammad; Rashedi, Ehsan; Parnianpour, Mohamad

    2017-01-08

    Human movement analysis is an important part of biomechanics and rehabilitation, for which many measurement systems are introduced. Among these, wearable devices have substantial biomedical applications, primarily since they can be implemented both in indoor and outdoor applications. In this study, a Trunk Motion System (TMS) using printed Body-Worn Sensors (BWS) is designed and developed. TMS can measure three-dimensional (3D) trunk motions, is lightweight, and is a portable and non-invasive system. After the recognition of sensor locations, twelve BWSs were printed on stretchable clothing with the purpose of measuring the 3D trunk movements. To integrate BWSs data, a neural network data fusion algorithm was used. The outcome of this algorithm along with the actual 3D anatomical movements (obtained by Qualisys system) were used to calibrate the TMS. Three healthy participants with different physical characteristics participated in the calibration tests. Seven different tasks (each repeated three times) were performed, involving five planar, and two multiplanar movements. Results showed that the accuracy of TMS system was less than 1.0°, 0.8°, 0.6°, 0.8°, 0.9°, and 1.3° for flexion/extension, left/right lateral bending, left/right axial rotation, and multi-planar motions, respectively. In addition, the accuracy of TMS for the identified movement was less than 2.7°. TMS, developed to monitor and measure the trunk orientations, can have diverse applications in clinical, biomechanical, and ergonomic studies to prevent musculoskeletal injuries, and to determine the impact of interventions.
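
    A minimal sketch of the neural-network data-fusion step, assuming scikit-learn's MLPRegressor: twelve sensor channels are regressed onto three trunk angles. The synthetic data, network size and training split are placeholders, not the authors' calibration against the Qualisys reference system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_sensors = 2000, 12                  # twelve printed BWS channels
X = rng.normal(size=(n_samples, n_sensors))      # stand-in sensor readings
true_W = rng.normal(size=(n_sensors, 3))
angles = X @ true_W + 0.1 * rng.normal(size=(n_samples, 3))  # flexion, bending, rotation

# small multilayer perceptron as the data-fusion / calibration model
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X[:1500], angles[:1500])
rmse = np.sqrt(np.mean((net.predict(X[1500:]) - angles[1500:]) ** 2, axis=0))
print("per-axis RMSE (synthetic units):", rmse)
```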

  15. Optimal periodic cooperative spectrum sensing based on weight fusion in cognitive radio networks.

    PubMed

    Liu, Xin; Jia, Min; Gu, Xuemai; Tan, Xuezhi

    2013-04-19

    The performance of cooperative spectrum sensing in cognitive radio (CR) networks depends on the sensing mode, the sensing time and the number of cooperative users. In order to improve the sensing performance and reduce the interference to the primary user (PU), a periodic cooperative spectrum sensing model based on weight fusion is proposed in this paper. Moreover, the sensing period, the sensing time and the searching time are optimized, respectively. First, the sensing period is optimized to improve spectrum utilization and reduce interference; then a joint optimization algorithm for the local sensing time and the number of cooperative users is proposed to obtain the optimal sensing time that maximizes the throughput of the cognitive radio user (CRU) during each period; finally, the water-filling principle is applied to optimize the searching time so that the CRU finds an idle channel within the shortest time. The simulation results show that, compared with previous algorithms, the optimal sensing period can improve the spectrum utilization of the CRU and decrease the interference to the PU significantly, the optimal sensing time can make the CRU achieve the largest throughput, and the optimal searching time can make the CRU find an idle channel with the least time.
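
    A minimal sketch of the weight-fusion part of such a scheme: local energy-detector statistics are combined with SNR-proportional weights and compared against a single threshold. The SNRs, the weighting rule and the threshold below are assumptions for illustration; the period, sensing-time and searching-time optimizations are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_samples = 4, 1000
snr = np.array([0.5, 1.0, 2.0, 0.2])   # hypothetical per-user SNRs (linear scale)

def local_energy(pu_present):
    """Energy-detector statistic at each cognitive radio user."""
    noise = rng.normal(size=(n_users, n_samples))
    signal = np.sqrt(snr)[:, None] * rng.normal(size=(n_users, n_samples)) if pu_present else 0.0
    return np.mean((noise + signal) ** 2, axis=1)

# weight fusion: weights proportional to SNR (normalized), then a single test
weights = snr / snr.sum()
threshold = 1.05   # would normally be set from a target false-alarm probability

for pu_present in (False, True):
    stat = weights @ local_energy(pu_present)
    print("PU present:", pu_present,
          "fused statistic:", round(float(stat), 3),
          "decision:", bool(stat > threshold))
```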

  16. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Mailah, Musa; Yussof, Wan Azhar B.; Hamedon, Zamzuri B.; Yussof, Zulkifli B.; Majeed, Anwar P. P.

    2016-02-01

    A road-feature-extraction-based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. Online mapping is performed continuously in the road environment to find the road properties that enable the robot to move from a given start position to a pre-determined goal while discovering and detecting roundabouts. The sensor fusion, involving a laser range finder, a camera and odometry installed on a new platform, is used to find the path of the robot and localize it within its environment. Local maps are built using the camera and laser range finder to recognize road border parameters such as road width, curbs and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify the road environment and build a local map for road following and roundabout detection.

  17. Temporally optimized spanwise vorticity sensor measurements in turbulent boundary layers

    NASA Astrophysics Data System (ADS)

    Morrill-Winter, C.; Klewicki, J.; Baidya, R.; Marusic, I.

    2015-12-01

    Multi-element hot-wire anemometry was used to measure spanwise vorticity fluctuations in turbulent boundary layers. Smooth wall boundary layer profiles, with very good spatial and temporal resolution, were acquired over a Kármán number range of 2000-12,700 at the Melbourne Wind Tunnel at the University of Melbourne and the University of New Hampshire's Flow Physics Facility. A custom hot-wire probe was necessary to simultaneously obtain velocity and spanwise vorticity measurements centered at a fixed point in space. A custom calibration/processing scheme was developed to utilize single-wall-parallel wires to optimize the accuracy of the measured wall-normal velocity fluctuations derived from the sensor's ×-array.

  18. Analysis and optimization of Love wave liquid sensors.

    PubMed

    Jakoby, B; Vellekoop, M J

    1998-01-01

    Love wave sensors are highly sensitive microacoustic devices, which are well suited for liquid sensing applications thanks to the shear polarization of the wave. The sensing mechanism thereby relies on the mechanical (or acoustic) interaction of the device with the liquid. The successful utilization of Love wave devices for this purpose requires proper shielding to avoid unwanted electric interaction of the liquid with the wave and the transducers. In this work we describe the effects of this electric interaction and the proper design of a shield to prevent it. We present analysis methods, which illustrate the impact of the interaction and which help to obtain an optimized design of the proposed shield. We also present experimental results for devices that have been fabricated according to these design rules.

  19. Developing Fast Fluorescent Protein Voltage Sensors by Optimizing FRET Interactions

    PubMed Central

    Sung, Uhna; Sepehri-Rad, Masoud; Piao, Hong Hua; Jin, Lei; Hughes, Thomas; Cohen, Lawrence B.; Baker, Bradley J.

    2015-01-01

    FRET (Förster Resonance Energy Transfer)-based protein voltage sensors can be useful for monitoring neuronal activity in vivo because the ratio of signals between the donor and acceptor pair reduces common sources of noise such as heart beat artifacts. We improved the performance of FRET based genetically encoded Fluorescent Protein (FP) voltage sensors by optimizing the location of donor and acceptor FPs flanking the voltage sensitive domain of the Ciona intestinalis voltage sensitive phosphatase. First, we created 39 different “Nabi1” constructs by positioning the donor FP, UKG, at 8 different locations downstream of the voltage-sensing domain and the acceptor FP, mKO, at 6 positions upstream. Several of these combinations resulted in large voltage dependent signals and relatively fast kinetics. Nabi1 probes responded with signal size up to 11% ΔF/F for a 100 mV depolarization and fast response time constants both for signal activation (~2 ms) and signal decay (~3 ms). We improved expression in neuronal cells by replacing the mKO and UKG FRET pair with Clover (donor FP) and mRuby2 (acceptor FP) to create Nabi2 probes. Nabi2 probes also had large signals and relatively fast time constants in HEK293 cells. In primary neuronal culture, a Nabi2 probe was able to differentiate individual action potentials at 45 Hz. PMID:26587834

  20. Distributed Optimal Power and Rate Control in Wireless Sensor Networks

    PubMed Central

    Tang, Meiqin; Bai, Jianyong; Li, Jing; Xin, Yalin

    2014-01-01

    With the rapid development of wireless sensor networks, reducing energy consumption has become one of the important factors in extending node lifetime, and it is necessary to adjust the transmit power of each node because of the limited energy available to the sensor nodes in the networks. This paper proposes a power and rate control model based on the network utility maximization (NUM) framework, where a weighting factor reflects the relative influence of the transmit power and the transmission rate on the utility function. In real networks, nodes interfere with each other when transmitting signals, which may lead to transmission failures and negatively impact network throughput. Using dual decomposition techniques, the NUM problem is decomposed into two distributed subproblems, and the conjugate gradient method is then applied to solve the optimization problem, with the calculation of the Hessian matrix and its inverse guaranteeing fast convergence of the algorithm. The convergence proof is also provided in this paper. Numerical examples show that the proposed solution achieves significantly higher throughput than existing approaches. PMID:24895654

  1. Distributed optimal power and rate control in wireless sensor networks.

    PubMed

    Tang, Meiqin; Bai, Jianyong; Li, Jing; Xin, Yalin

    2014-01-01

    With the rapid development of wireless sensor networks, reducing energy consumption has become one of the important factors in extending node lifetime, and it is necessary to adjust the transmit power of each node because of the limited energy available to the sensor nodes in the networks. This paper proposes a power and rate control model based on the network utility maximization (NUM) framework, where a weighting factor reflects the relative influence of the transmit power and the transmission rate on the utility function. In real networks, nodes interfere with each other when transmitting signals, which may lead to transmission failures and negatively impact network throughput. Using dual decomposition techniques, the NUM problem is decomposed into two distributed subproblems, and the conjugate gradient method is then applied to solve the optimization problem, with the calculation of the Hessian matrix and its inverse guaranteeing fast convergence of the algorithm. The convergence proof is also provided in this paper. Numerical examples show that the proposed solution achieves significantly higher throughput than existing approaches.
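
    The dual-decomposition idea can be illustrated on a much simpler rate-allocation problem. The sketch below (plain NumPy, not the authors' model) maximizes a sum of weighted log utilities under a single shared capacity constraint: each node solves its own subproblem in closed form given the current price (dual variable), and the price is updated with a subgradient step rather than the paper's conjugate gradient method; the weights, capacity and step size are illustrative.

    import numpy as np

    def num_dual_decomposition(weights, capacity, step=0.05, iters=500):
        """Toy NUM: maximize sum_i w_i * log(r_i) s.t. sum_i r_i <= capacity.

        Dual decomposition: each node computes r_i = w_i / price locally;
        the shared price follows a (sub)gradient update on the dual problem.
        """
        price = 1.0
        rates = np.zeros_like(weights, dtype=float)
        for _ in range(iters):
            rates = weights / price                   # per-node subproblem (closed form)
            excess = rates.sum() - capacity           # constraint violation
            price = max(1e-6, price + step * excess)  # price (dual variable) update
        return rates, price

    # Three nodes with weights 1, 2, 3 sharing a capacity of 12: the allocation
    # converges to [2, 4, 6], i.e. proportional to the weights.
    rates, price = num_dual_decomposition(np.array([1.0, 2.0, 3.0]), capacity=12.0)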

  2. Results of a Feasibility Study on Sensor Data Fusion for the CP-140 Aurora Maritime Patrol Aircraft

    DTIC Science & Technology

    1996-02-01

    all services, (2) establish a forum for the exchange of research and technology, and (3) develop models, terminology and a taxonomy of the areas of... is proposed with recoverable steps where different levels of fusion sophistication can be implemented based on the availability of the technology and... of the technology and the actual status of the sensors on the aircraft. The results presented in this report will be used to derive reasonable and

  3. Optimized swimmer tracking system by a dynamic fusion of correlation and color histogram techniques

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2015-12-01

    To design a robust swimmer tracking system, we took into account two well-known tracking techniques: the nonlinear joint transform correlation (NL-JTC) and the color histogram. The two techniques perform comparably well, yet both have substantial limitations; interestingly, they also show some complementarity. The correlation technique yields accurate detection but is sensitive to rotation, scale and contour deformation, whereas the color histogram technique is robust to rotation and contour deformation but shows low accuracy and is highly sensitive to luminosity and confusing background colors. These observations suggested the possibility of a dynamic fusion of the correlation plane and the color scores map. Before this fusion, two steps are required. First, a sub-plane of correlation describing the similarity between the reference and target images is extracted. This sub-plane has the same size as the color scores map, but the two have different interval values; thus, a second step is required: normalizing the planes to the same interval so that they can be fused. To determine the benefits of this fusion technique, we first tested it on a synthetic image containing different shapes with different colors. We thus were able to optimize the correlation plane and color histogram techniques before applying our fusion technique to real videos of swimmers in international competitions. Finally, a comparative study of the dynamic fusion technique and the two classical techniques was carried out to demonstrate the efficacy of the proposed technique. The criteria of comparison were the tracking percentage, the peak-to-correlation energy (PCE), which evaluates the sharpness of the peak (accuracy), and the local standard deviation (Local-STD), which assesses the noise in the planes (robustness).
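
    A minimal sketch of the fusion step is given below (NumPy only). It assumes the correlation sub-plane and the color score map have already been computed at the same size; both are rescaled to [0, 1] by min-max normalization and combined with an illustrative fixed weight alpha (not the paper's dynamic weighting), and the tracked position is read off the peak of the fused plane.

    import numpy as np

    def min_max_normalize(plane):
        """Rescale a 2-D score plane to the interval [0, 1]."""
        lo, hi = plane.min(), plane.max()
        return np.zeros_like(plane) if hi == lo else (plane - lo) / (hi - lo)

    def fuse_planes(correlation_plane, color_score_map, alpha=0.5):
        """Fuse a correlation sub-plane with a color-histogram score map.

        Both inputs must share the same shape; alpha weights the correlation
        term against the (1 - alpha)-weighted color term.
        """
        c = min_max_normalize(correlation_plane)
        h = min_max_normalize(color_score_map)
        fused = alpha * c + (1.0 - alpha) * h
        # The estimated target position is the peak of the fused plane.
        peak = np.unravel_index(np.argmax(fused), fused.shape)
        return peak, fused

    # Synthetic example with random planes of matching size
    rng = np.random.default_rng(0)
    peak, fused = fuse_planes(rng.random((64, 64)), rng.random((64, 64)), alpha=0.6)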

  4. Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.

    PubMed

    Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing

    2011-01-01

    In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF.
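
    As a concrete illustration of the deterministic sigma-point sampling mentioned above, the sketch below implements the standard scaled unscented transform in NumPy. This is the generic textbook construction, not the authors' full FUZZY-IMMUKF filter; the scaling parameters alpha, beta and kappa take commonly used default values.

    import numpy as np

    def sigma_points(mean, cov, alpha=0.5, beta=2.0, kappa=0.0):
        """Generate scaled sigma points and weights for the unscented transform."""
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)           # matrix square root
        pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n + 1 points
        wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
        return pts, wm, wc

    def unscented_transform(pts, wm, wc, f):
        """Propagate sigma points through a nonlinear f and recover mean/covariance."""
        ys = np.array([f(p) for p in pts])
        mean = wm @ ys
        diff = ys - mean
        cov = (wc[:, None] * diff).T @ diff
        return mean, cov

    # Propagate a 2-D Gaussian through a mild nonlinearity
    m, P = np.array([1.0, 0.5]), np.diag([0.1, 0.2])
    pts, wm, wc = sigma_points(m, P)
    ym, yP = unscented_transform(pts, wm, wc, lambda x: np.array([np.sin(x[0]), x[1]**2]))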

  5. A Locomotion Intent Prediction System Based on Multi-Sensor Fusion

    PubMed Central

    Chen, Baojun; Zheng, Enhao; Wang, Qining

    2014-01-01

    Locomotion intent prediction is essential for the control of powered lower-limb prostheses to realize smooth locomotion transitions. In this research, we develop a multi-sensor fusion based locomotion intent prediction system, which can recognize the current locomotion mode and detect locomotion transitions in advance. Seven able-bodied subjects were recruited for this research. Signals from two foot pressure insoles and three inertial measurement units (one on the thigh, one on the shank and the other on the foot) are measured. A two-level recognition strategy with a linear discriminant classifier is used for the recognition. Six locomotion modes and ten locomotion transitions are tested in this study. Recognition accuracy during steady locomotion periods (i.e., no locomotion transitions) is 99.71% ± 0.05% for the seven able-bodied subjects. During locomotion transition periods, all transitions are correctly detected and most of them can be detected before transiting to the new locomotion mode. No significant deterioration in recognition performance is observed in the five hours following system training, and only a small number of experimental trials is required to train reliable classifiers. PMID:25014097

  6. Fuzzy Adaptive Interacting Multiple Model Nonlinear Filter for Integrated Navigation Sensor Fusion

    PubMed Central

    Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing

    2011-01-01

    In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for the maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points through deterministic sampling, such that a linearization process is not necessary, and therefore the errors caused by linearization as in the traditional extended Kalman filter (EKF) can be avoided. The nonlinear filters naturally suffer, to some extent, the same problem as the EKF for which the uncertainty of the process noise and measurement noise will degrade the performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized for determining the adequate value of process noise covariance. The fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through the fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem for the vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in the navigation estimation accuracy as compared to the relatively conventional approaches such as the UKF and IMMUKF. PMID:22319400

  7. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information.

    PubMed

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-10-27

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, extended models and algorithms with analytic results are presented. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. In addition, it can be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of the model parameters that represent the physical conditions of sensors and roads.

  8. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    channel coding. 2. RESOURCE ALLOCATION USING CROSS-LAYER OPTIMIZATION. This work considers a wireless visual sensor network that... Subject terms: cross-layer, visual sensor network, Code Division Multiple Access (CDMA), resource allocation, H.265, spread spectrum, joint source-channel... Dates covered: January 2008 – August 2008.

  9. Sensor Location Problem Optimization for Traffic Network with Different Spatial Distributions of Traffic Information

    PubMed Central

    Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian

    2016-01-01

    To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic for practical applications due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and its simplified model are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. Next, extended models and algorithms with analytic results are presented. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. In addition, it can be concluded that the optimal sensor spacing is independent of end restrictions but dependent on the values of the model parameters that represent the physical conditions of sensors and roads. PMID:27801794

  10. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook; Fetzer, Eric

    2014-05-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map-reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMware images and scales to any number of nodes in a hybrid Cloud (private Eucalyptus and public Amazon). Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Multi-year datasets are automatically "sharded" by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will present the architecture of SciReduce, describe the
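
    The sharding-by-time idea can be sketched as follows in plain Python, with a thread pool standing in for the Cloud worker nodes and a placeholder reduction; the granule years and OPeNDAP-style URLs are invented for illustration and are not SciReduce's actual naming scheme.

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def shard_by_year(granules):
        """Group (year, url) granule records into per-year shards."""
        shards = defaultdict(list)
        for year, url in granules:
            shards[year].append(url)
        return shards

    def reduce_shard(urls):
        """Per-shard task: a real system would pull each variable on demand
        (e.g. through an OPeNDAP subsetting URL) and accumulate statistics."""
        return len(urls)  # placeholder reduction

    granules = [(2003, "http://example.org/opendap/airs_2003_001.nc"),
                (2003, "http://example.org/opendap/airs_2003_002.nc"),
                (2004, "http://example.org/opendap/airs_2004_001.nc")]
    shards = shard_by_year(granules)
    with ThreadPoolExecutor() as pool:
        per_year = dict(zip(shards, pool.map(reduce_shard, shards.values())))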

  11. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  12. Porous biodegradable lumbar interbody fusion cage design and fabrication using integrated global-local topology optimization with laser sintering.

    PubMed

    Kang, Heesuk; Hollister, Scott J; La Marca, Frank; Park, Paul; Lin, Chia-Ying

    2013-10-01

    Biodegradable cages have received increasing attention for use in spinal procedures involving interbody fusion to resolve complications associated with nondegradable cages, such as stress shielding and long-term foreign body reaction. However, the relatively weak initial material strength compared to permanent materials, and its subsequent reduction due to degradation, may be problematic. To design, for a preclinical large animal study, a porous biodegradable interbody fusion cage that can withstand physiological loads while possessing sufficient interconnected porosity for bony bridging and fusion, we developed a multiscale topology optimization technique. Topology optimization at the macroscopic scale provides an optimal structural layout that ensures mechanical strength, while optimally designed microstructures, which replace the macroscopic material layout, ensure maximum permeability. Optimally designed cages were fabricated using solid freeform fabrication of poly(ε-caprolactone) mixed with hydroxyapatite. Compression tests revealed that the yield strength of the optimized fusion cages was two times that of typical human lumbar spine loads. Computational analysis further confirmed the mechanical integrity within the human lumbar spine, although the pore structure locally underwent stresses higher than the yield stress. This optimization technique may be utilized to balance the complex requirements of load-bearing, stress shielding, and interconnected porosity when using biodegradable materials for fusion cages.

  13. Development and Application of Non-Linear Image Enhancement and Multi-Sensor Fusion Techniques for Hazy and Dark Imaging

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur

    2005-01-01

    The purpose of this research was to develop enhancement and multi-sensor fusion algorithms and techniques to make it safer for the pilot to fly in what would normally be considered Instrument Flight Rules (IFR) conditions, where pilot visibility is severely restricted due to fog, haze or other weather phenomena. We proposed to use the non-linear Multiscale Retinex (MSR) as the basic driver for developing an integrated enhancement and fusion engine. When we started this research, the MSR was being applied primarily to grayscale imagery such as medical images, or to three-band color imagery such as that produced in consumer photography; it was not, however, being applied to other imagery such as that produced by infrared image sources. We felt that by using the MSR algorithm in conjunction with multiple imaging modalities such as long-wave infrared (LWIR), short-wave infrared (SWIR), and visible spectrum (VIS) imagery, we could substantially improve on the then state-of-the-art enhancement algorithms, especially in poor visibility conditions. We proposed the following tasks: 1) investigate the effects of applying the MSR to LWIR and SWIR images, which consisted of optimizing the algorithm in terms of surround scales and weights for these spectral bands; 2) fuse the LWIR and SWIR images with the VIS images using the MSR framework to determine the best possible representation of the desired features; 3) evaluate different mixes of LWIR, SWIR and VIS bands for maximum fog and haze reduction and low-light-level compensation; 4) modify the existing algorithms to work with video sequences. Over the course of the three-year research period, we were able to accomplish these tasks and report on them at various internal presentations at NASA Langley Research Center, and in presentations and publications elsewhere. A description of the work performed under the tasks is provided in Section 2. The complete list of relevant publications during the research
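
    The multiscale Retinex itself has a compact published form, MSR(x, y) = sum_k w_k [log I(x, y) - log(F_k * I)(x, y)], where the F_k are Gaussian surround functions at several scales. The sketch below applies it to a single-channel image with SciPy; the scale and weight choices are illustrative defaults, not the values tuned in this research for LWIR/SWIR/VIS imagery.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_retinex(image, sigmas=(15, 80, 250), weights=None, eps=1.0):
        """Multiscale Retinex for a single-channel image.

        Each scale contributes log(I) - log(Gaussian_sigma * I); the scales
        are combined with (by default equal) weights. eps avoids log(0).
        """
        image = image.astype(np.float64) + eps
        if weights is None:
            weights = np.full(len(sigmas), 1.0 / len(sigmas))
        out = np.zeros_like(image)
        for w, sigma in zip(weights, sigmas):
            surround = gaussian_filter(image, sigma=sigma)
            out += w * (np.log(image) - np.log(surround))
        # Stretch the result back to a displayable 0-255 range.
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
        return (255.0 * out).astype(np.uint8)

    # Example on a synthetic low-contrast image
    img = 40.0 + 20.0 * np.random.default_rng(1).random((128, 128))
    enhanced = multiscale_retinex(img)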

  14. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    SciTech Connect

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-08-15

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the area under the receiver operating characteristic (ROC) curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p<0.02) and achieved AUC=0.85±0.01. The DF-P surpassed the other classifiers in terms of pAUC (p<0.01) and reached pAUC=0.38±0.02. For the mass data set, DF-A outperformed both the ANN and the LDA (p<0.04) and achieved AUC=0.94±0.01. Although for this data set there were no statistically significant differences among the classifiers' pAUC values (pAUC=0.57±0.07 to 0.67±0.05, p>0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p<0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.
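
    For reference, the two performance metrics can be computed from classifier scores as sketched below with scikit-learn and NumPy. The partial area here is taken over a low false-positive-rate window and normalized by the window width; this is a common convention and only an approximation of the high-sensitivity pAUC definition used in the study.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def auc_and_partial_auc(y_true, scores, max_fpr=0.10):
        """Full ROC AUC plus a normalized partial AUC over FPR in [0, max_fpr]."""
        fpr, tpr, _ = roc_curve(y_true, scores)
        full_auc = auc(fpr, tpr)
        # Interpolate the ROC curve on the restricted FPR window and integrate.
        grid = np.linspace(0.0, max_fpr, 200)
        tpr_grid = np.interp(grid, fpr, tpr)
        partial = np.sum(0.5 * (tpr_grid[1:] + tpr_grid[:-1]) * np.diff(grid)) / max_fpr
        return full_auc, partial

    # Synthetic two-class scores for illustration
    rng = np.random.default_rng(2)
    y = np.concatenate([np.zeros(200), np.ones(200)])
    s = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
    print(auc_and_partial_auc(y, s))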

  15. Swarm intelligence algorithms for integrated optimization of piezoelectric actuator and sensor placement and feedback gains

    NASA Astrophysics Data System (ADS)

    Dutta, Rajdeep; Ganguli, Ranjan; Mani, V.

    2011-10-01

    Swarm intelligence algorithms are applied for optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. The optimal locations of actuators/sensors and feedback gain represent a constrained non-linear optimization problem. This problem is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely, Artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered and the numerical results were obtained by using genetic algorithm and gradient based optimization methods. We consider the same problem and present the results obtained by using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem with five collocated actuators/sensors is considered and the numerical results obtained by using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors and gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
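
    The penalty-function conversion mentioned above is generic and easy to sketch. The toy example below (NumPy) adds a quadratic exterior penalty for inequality constraints and then minimizes the penalized objective by random search, which stands in for a swarm algorithm such as ABC or GSO; the objective, constraint and penalty weight are illustrative, not the paper's structural model.

    import numpy as np

    def penalized_objective(x, objective, constraints, rho=1e3):
        """Exterior quadratic penalty: objective(x) + rho * sum(max(0, g_i(x))^2)
        for inequality constraints written as g_i(x) <= 0."""
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + rho * penalty

    # Toy problem: minimize (x - 3)^2 subject to x <= 2.
    objective = lambda x: (x[0] - 3.0) ** 2
    constraints = [lambda x: x[0] - 2.0]

    rng = np.random.default_rng(3)
    candidates = rng.uniform(-5.0, 5.0, size=(2000, 1))
    best = min(candidates, key=lambda x: penalized_objective(x, objective, constraints))
    # best[0] lands near the constrained optimum x = 2.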

  16. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, meaning no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost accelerometers and gyroscopes based on micro-electro-mechanical systems (MEMS) technology. There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated step count and per-step strength are then fed into the proposed fuzzy logic estimation algorithm to estimate the user's step lengths. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its
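
    The dead-reckoning position update itself is straightforward once step length and heading are available; a minimal sketch is given below (NumPy). The fuzzy step-length estimator and the gyroscope integration are assumed to have already produced the per-step inputs, and the example step lengths and headings are illustrative.

    import numpy as np

    def dead_reckoning(step_lengths, headings_rad, start=(0.0, 0.0)):
        """Accumulate 2-D position from per-step lengths and headings.

        headings_rad[i] is the walking direction (radians, from the x axis)
        during step i, e.g. obtained by integrating gyroscope angular rate.
        """
        x, y = start
        track = [(x, y)]
        for length, heading in zip(step_lengths, headings_rad):
            x += length * np.cos(heading)
            y += length * np.sin(heading)
            track.append((x, y))
        return np.array(track)

    # Four steps of about 0.7 m, turning 90 degrees after the second step
    track = dead_reckoning([0.7, 0.7, 0.7, 0.7],
                           np.radians([0.0, 0.0, 90.0, 90.0]))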

  17. Improved Maturity and Ripeness Classifications of Mangifera Indica cv. Harumanis Mangoes through Sensor Fusion of an Electronic Nose and Acoustic Sensor

    PubMed Central

    Zakaria, Ammar; Shakaff, Ali Yeon Md; Masnan, Maz Jamilah; Saad, Fathinul Syahir Ahmad; Adom, Abdul Hamid; Ahmad, Mohd Noor; Jaafar, Mahmad Nor; Abdullah, Abu Hassan; Kamarudin, Latifah Munirah

    2012-01-01

    In recent years, there have been a number of reported studies on the use of non-destructive techniques to evaluate and determine mango maturity and ripeness levels. However, most of these reported works were conducted using single-modality sensing systems, using either an electronic nose, acoustics or other non-destructive measurements. This paper presents work on the classification of mango (Mangifera Indica cv. Harumanis) maturity and ripeness levels using fusion of the data from an electronic nose and an acoustic sensor. Three groups of samples, each from two different harvesting times (week 7 and week 8), were evaluated by the e-nose and then by the acoustic sensor. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were able to discriminate the mangoes harvested at week 7 and week 8 based solely on the aroma and volatile gases released from the mangoes. However, when six different groups of different maturity and ripeness levels were combined in one classification analysis, both PCA and LDA were unable to discriminate the age difference of the Harumanis mangoes. Instead of six different groups, only four were observed using the LDA, while PCA showed only two distinct groups. By applying a low-level data fusion technique to the e-nose and acoustic data, the classification of maturity and ripeness levels using LDA was improved. However, no significant improvement was observed using PCA with the data fusion technique. Further work using a hybrid LDA-Competitive Learning Neural Network (LDA-CLNN) was performed to validate the fusion technique and classify the samples. It was found that the LDA-CLNN was also improved significantly when data fusion was applied. PMID:22778629
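
    A minimal sketch of low-level (feature-level) fusion followed by PCA and LDA is shown below with scikit-learn; the e-nose and acoustic feature blocks and the three class labels are synthetic placeholders, not the Harumanis data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)
    # Placeholder features: 60 samples, 8 e-nose channels and 4 acoustic features,
    # three maturity/ripeness groups (labels 0, 1, 2).
    enose = rng.normal(size=(60, 8)) + np.repeat([0.0, 0.5, 1.0], 20)[:, None]
    acoustic = rng.normal(size=(60, 4)) + np.repeat([0.0, 0.3, 0.6], 20)[:, None]
    labels = np.repeat([0, 1, 2], 20)

    # Low-level fusion: concatenate the two sensor feature blocks per sample.
    fused = np.hstack([enose, acoustic])

    scores = PCA(n_components=2).fit_transform(fused)      # unsupervised projection
    lda = LinearDiscriminantAnalysis().fit(fused, labels)  # supervised classifier
    print("PCA score shape:", scores.shape)
    print("LDA training accuracy:", lda.score(fused, labels))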

  18. Optimal sensor placement for multi-setup modal analysis of structures

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Maes, Kristof; De Roeck, Guido; Reynders, Edwin; Papadimitriou, Costas; Lombaert, Geert

    2017-08-01

    Modal tests on large structures are often performed in multiple setups for practical reasons. Several sensors are kept fixed as reference sensors over all setups, while the others, so-called roving sensors, are moved from one setup to another. This paper develops an optimal sensor placement strategy for multi-setup modal identification, which simultaneously optimizes the locations of the reference and roving sensors. As an optimality criterion, the Information Entropy is adopted, which is a scalar measure of uncertainty in the Bayesian framework. The focus in the application is on repetitive structures where modes typically occur in clusters, with closely spaced natural frequencies and similar wavelengths. The proposed strategy is illustrated for selecting optimal positions of uni-axial sensors for a repetitive frame structure. The influence of the number of reference sensors and of two strategies for positioning roving sensors, i.e. a clustered and a uniform distribution of roving sensors, is investigated. The number of reference sensors is found to be preferably equal to or larger than the number of modes to be identified. In this case, the information content, as quantified by the Information Entropy, is not very sensitive to the roving sensor strategy. If fewer reference sensors are used, it is highly preferable to distribute the roving sensors uniformly over the structure instead of clustering them. The proposed strategy has been validated by an experimental modal test on a floor of an office building of KU Leuven, which has a nearly repetitive structural layout. The results show how optimally locating sensors allows extracting more information from the data. Though the focus is on applications involving repetitive structures, the proposed strategy can be applied to multi-setup modal identification of any large structure.
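
    The information-entropy criterion can be illustrated with the common log-determinant surrogate: up to constants, the entropy decreases as log det of the Fisher information of the selected sensor set increases. The sketch below exhaustively searches small candidate sets for a given mode-shape matrix; it ignores prediction-error correlation, the multi-setup structure and the reference/roving split handled in the paper, and the mode shapes are random placeholders.

    import numpy as np
    from itertools import combinations

    def information_entropy(mode_shapes, sensor_idx):
        """Information-entropy surrogate for a candidate sensor set.

        mode_shapes: (n_dof, n_modes) matrix of mode-shape ordinates. Up to
        additive constants, the entropy decreases as log det of the Fisher
        information Phi_s^T Phi_s of the selected rows increases.
        """
        phi = mode_shapes[list(sensor_idx), :]
        sign, logdet = np.linalg.slogdet(phi.T @ phi)
        return np.inf if sign <= 0 else -0.5 * logdet

    def best_sensor_set(mode_shapes, n_sensors):
        """Exhaustive search for the sensor set with minimum entropy."""
        n_dof = mode_shapes.shape[0]
        return min(combinations(range(n_dof), n_sensors),
                   key=lambda idx: information_entropy(mode_shapes, idx))

    # 10 candidate DOFs, 3 modes, choose 4 sensor locations
    phi = np.random.default_rng(4).normal(size=(10, 3))
    print(best_sensor_set(phi, 4))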

  19. Fusion of Optimized Indicators from Advanced Driver Assistance Systems (ADAS) for Driver Drowsiness Detection

    PubMed Central

    Daza, Iván G.; Bergasa, Luis M.; Bronte, Sebastián; Yebes, J. Javier; Almazán, Javier; Arroyo, Roberto

    2014-01-01

    This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from Advanced Driver Assistance Systems (ADAS) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and in future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators proposed in the literature is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of the indicators, derived from trials on a third-generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future work derived from this study. PMID:24412904

  20. Fusion of optimized indicators from Advanced Driver Assistance Systems (ADAS) for driver drowsiness detection.

    PubMed

    Daza, Iván García; Bergasa, Luis Miguel; Bronte, Sebastián; Yebes, Jose Javier; Almazán, Javier; Arroyo, Roberto

    2014-01-09

    This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from Advanced Driver Assistance Systems (ADAS) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and in future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators proposed in the literature is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of the indicators, derived from trials on a third-generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future work derived from this study.

  1. A Multi-Sensor Data Fusion Approach for Atrial Hypertrophy Disease Diagnosis Based on Characterized Support Vector Hyperspheres.

    PubMed

    Zhu, Yungang; Liu, Dayou; Grosu, Radu; Wang, Xinhua; Duan, Hongying; Wang, Guodong

    2017-09-07

    Disease diagnosis can be performed by fusing the data acquired by multiple medical sensors from patients, and it is a crucial task in sensor-based e-healthcare systems. However, few effective diagnosis methods based on sensor data fusion exist for atrial hypertrophy disease, which remains a challenging problem. In this article, we propose a novel multi-sensor data fusion method for atrial hypertrophy diagnosis, namely, characterized support vector hyperspheres (CSVH). Instead of constructing a hyperplane, as a traditional support vector machine does, the proposed method generates hyperspheres to collect the discriminative medical information, since a hypersphere is more powerful for data description than a hyperplane. Specifically, CSVH constructs two characterized hyperspheres for the classes of patients and healthy subjects, respectively. The hypersphere for the patient class is developed in a weighted version so as to take the diversity of patient instances into consideration. The hypersphere for the class of healthy people is kept as far as possible from the patient class in order to achieve maximum separation between the two classes. A query is labelled by membership functions defined on the two hyperspheres. If the query is rejected by both classes, the angle information of the query with respect to outliers and overlapping-region data is investigated to provide the final decision. The experimental results illustrate that the proposed method achieves the highest diagnosis accuracy among the state-of-the-art methods.
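
    The membership idea can be sketched with two pre-fitted hyperspheres (one center and radius per class). The membership function, rejection threshold and feature values below are illustrative assumptions; the weighted training of the spheres and the angle-based fallback from the abstract are not reproduced.

    import numpy as np

    def hypersphere_membership(x, center, radius):
        """Soft membership of a query in one class hypersphere:
        1 at the center, 0.5 on the boundary, decaying towards 0 outside."""
        d = np.linalg.norm(np.asarray(x) - np.asarray(center))
        return 1.0 / (1.0 + d / radius)

    def classify(x, patient_sphere, healthy_sphere, reject_threshold=0.25):
        """Label a query by the larger membership; if both memberships are low,
        the query is flagged for the angle-based fallback (not implemented)."""
        mp = hypersphere_membership(x, *patient_sphere)
        mh = hypersphere_membership(x, *healthy_sphere)
        if max(mp, mh) < reject_threshold:
            return "rejected"  # would fall back to angle information
        return "patient" if mp > mh else "healthy"

    # Two pre-fitted spheres in a 2-D feature space (centers and radii assumed)
    patient_sphere = (np.array([2.0, 2.0]), 1.0)
    healthy_sphere = (np.array([-2.0, -2.0]), 1.5)
    print(classify(np.array([1.6, 2.2]), patient_sphere, healthy_sphere))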

  2. Novel free-form hohlraum shape design and optimization for laser-driven inertial confinement fusion

    SciTech Connect

    Jiang, Shaoen; Jing, Longfei; Ding, Yongkun; Huang, Yunbao

    2014-10-15

    The hohlraum shape attracts considerable attention because there is no successful ignition method for laser-driven inertial confinement fusion at the National Ignition Facility. The available hohlraums are typically designed with simple conic curves, including ellipses, parabolas, arcs, or Lame curves, which allow only a few design parameters for the shape optimization, making it difficult to improve the performance, e.g., the energy coupling efficiency or radiation drive symmetry. A novel free-form hohlraum design and optimization approach based on the non-uniform rational basis spline (NURBS) model is proposed. In the present study, (1) all kinds of hohlraum shapes can be uniformly represented using NURBS, which is greatly beneficial for obtaining the optimal available hohlraum shapes, and (2) such free-form uniform representation enables us to obtain an optimal shape over a large design domain for the hohlraum with a more uniform radiation and higher drive temperature of the fuel capsule. Finally, a hohlraum is optimized and evaluated with respect to the drive temperature and symmetry at the Shenguang III laser facility in China. The drive temperature and symmetry results indicate that such a free-form representation is advantageous over available hohlraum shapes because it can substantially expand the shape design domain so as to obtain an optimal hohlraum with high performance.
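
    A NURBS curve point is a weighted rational combination of control points, C(u) = sum_i N_{i,p}(u) w_i P_i / sum_i N_{i,p}(u) w_i, with B-spline basis functions N_{i,p}. The sketch below evaluates a small 2-D NURBS curve with the Cox-de Boor recursion; the control points, weights and knot vector describe a quarter circle and are purely illustrative, not a hohlraum profile.

    import numpy as np

    def bspline_basis(i, p, u, knots):
        """Cox-de Boor recursion for the i-th B-spline basis of degree p."""
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + p] != knots[i]:
            left = (u - knots[i]) / (knots[i + p] - knots[i]) \
                   * bspline_basis(i, p - 1, u, knots)
        right = 0.0
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                    * bspline_basis(i + 1, p - 1, u, knots)
        return left + right

    def nurbs_point(u, control_points, weights, knots, degree):
        """Evaluate a 2-D NURBS curve point at parameter u."""
        num, den = np.zeros(2), 0.0
        for i, (P, w) in enumerate(zip(control_points, weights)):
            b = w * bspline_basis(i, degree, u, knots)
            num += b * np.asarray(P, dtype=float)
            den += b
        return num / den

    # Quadratic NURBS quarter circle (illustrative data only); the half-open
    # interval convention above means we stop just short of u = 1.
    ctrl = [(0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
    wts = [1.0, np.sqrt(2.0) / 2.0, 1.0]
    kts = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    curve = np.array([nurbs_point(u, ctrl, wts, kts, degree=2)
                      for u in np.linspace(0.0, 0.999, 50)])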

  3. An Optimized PatchMatch for multi-scale and multi-feature label fusion