Model-theoretic framework for sensor data fusion
NASA Astrophysics Data System (ADS)
Zavoleas, Kyriakos P.; Kokar, Mieczyslaw M.
1993-09-01
The main goal of our research in sensory data fusion (SDF) is the development of a systematic approach (a methodology) to designing systems for interpreting sensory information and for reasoning about the situation based upon this information and upon available data bases and knowledge bases. To achieve such a goal, two kinds of subgoals have been set: (1) develop a theoretical framework in which rational design/implementation decisions can be made, and (2) design a prototype SDF system along the lines of the framework. Our initial design of the framework has been described in our previous papers. In this paper we concentrate on the model-theoretic aspects of this framework. We postulate that data are embedded in data models, and information processing mechanisms are embedded in model operators. The paper is devoted to analyzing the classes of model operators and their significance in SDF. We investigate transformation, abstraction, and fusion operators. A prototype SDF system, fusing data from range and intensity sensors, is presented, exemplifying the structures introduced. Our framework is justified by the fact that it provides modularity, traceability of information flow, and a basis for a specification language for SDF.
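The operator taxonomy above lends itself to a functional reading. A speculative sketch (not from the paper; names, units, and data are invented for illustration) of the three operator classes as composable functions on data models:

```python
# Speculative sketch: each operator class maps data models to data models,
# so SDF pipelines compose. A "data model" here is just a dict.

def transform(model):
    """Transformation operator: re-express a model (e.g., unit change)."""
    return {k: v / 1000.0 for k, v in model.items()}       # mm -> m

def abstract(model):
    """Abstraction operator: reduce a model to a coarser summary."""
    return {"mean_range": sum(model.values()) / len(model)}

def fuse(model_a, model_b):
    """Fusion operator: combine two models into one richer model."""
    return {**model_a, **model_b}

range_model = {"p1": 1200.0, "p2": 1800.0}                 # millimetres
intensity_model = {"albedo": 0.42}
fused = fuse(abstract(transform(range_model)), intensity_model)
print(fused)  # → {'mean_range': 1.5, 'albedo': 0.42}
```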
High Level Information Fusion (HLIF) with nested fusion loops
NASA Astrophysics Data System (ADS)
Woodley, Robert; Gosnell, Michael; Fischer, Amber
2013-05-01
Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion, perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analysis within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling, and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without double counting of information or other biasing issues. The initial FURNACE project focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analysts' efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only benefit the military intelligence realm, but also the larger homeland defense, law enforcement, and business intelligence markets.
Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images
NASA Astrophysics Data System (ADS)
Zhu, Chaozhe; Jiang, Tianzi
2003-05-01
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, providing information on the properties of tissues from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge-guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by the different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and the partial volume effect. In the second part, a possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and the final fuzzy map of MS lesions is then constructed through the fusion of the fuzzy maps obtained from the different spectra. Finally, the 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
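The fuzzy-fusion step can be illustrated with a toy sketch (not the authors' code; the fusion operator and decision threshold are assumptions):

```python
# Illustrative sketch: fusing per-spectrum fuzzy membership maps for a
# lesion class. Values are degrees of membership in [0, 1], one per voxel.

def fuse_fuzzy_maps(maps, operator="min"):
    """Fuse voxel-aligned fuzzy membership maps from different spectra.

    maps     -- list of equal-length lists of membership degrees in [0, 1]
    operator -- "min" (conjunctive, cautious) or "mean" (compromise)
    """
    fused = []
    for values in zip(*maps):
        if operator == "min":
            fused.append(min(values))
        else:
            fused.append(sum(values) / len(values))
    return fused

def defuzzify(fused_map, threshold=0.5):
    """Hard 0/1 segmentation decision from the fused fuzzy map."""
    return [1 if v >= threshold else 0 for v in fused_map]

# Membership of four voxels in the "lesion" class, per spectrum:
t1, t2, pd = [0.9, 0.2, 0.6, 0.1], [0.8, 0.3, 0.7, 0.2], [0.7, 0.1, 0.9, 0.4]
fused = fuse_fuzzy_maps([t1, t2, pd], operator="min")
print(defuzzify(fused))  # → [1, 0, 1, 0]
```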
A Survey and Analysis of Frameworks and Framework Issues for Information Fusion Applications
NASA Astrophysics Data System (ADS)
Llinas, James
This paper was stimulated by the proposed project for the Santander Bank-sponsored "Chairs of Excellence" program in Spain, of which the author is a recipient. That project involves research on characterizing a robust, problem-domain-agnostic framework in which Information Fusion (IF) processes of all descriptions, including artificial intelligence processes and techniques, could be developed. The paper describes the IF process and its requirements, a literature survey on IF frameworks, and a new proposed framework that will be implemented and evaluated at Universidad Carlos III de Madrid, Colmenarejo Campus.
Combinatorial Fusion Analysis for Meta Search Information Retrieval
NASA Astrophysics Data System (ADS)
Hsu, D. Frank; Taksa, Isak
Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
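The rank/score machinery can be sketched as follows (hypothetical document names and scores; combinatorial fusion analysis proper considers many more combinations and explicit diversity measures):

```python
# Illustrative sketch of combinatorial fusion: score combination and rank
# combination of two scoring systems over the same set of documents.

def rank_function(scores):
    """Map each item to its rank (1 = best) under a scoring system."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {doc: i + 1 for i, doc in enumerate(ordered)}

def score_combination(systems):
    """Average the (comparably scaled) scores of the individual systems."""
    docs = systems[0].keys()
    return {d: sum(s[d] for s in systems) / len(systems) for d in docs}

def rank_combination(systems):
    """Average the ranks; lower combined rank means better."""
    ranks = [rank_function(s) for s in systems]
    docs = systems[0].keys()
    return {d: sum(r[d] for r in ranks) / len(ranks) for d in docs}

a = {"d1": 0.9, "d2": 0.4, "d3": 0.7}   # engine A's normalized scores
b = {"d1": 0.5, "d2": 0.8, "d3": 0.6}   # engine B's normalized scores
sc = score_combination([a, b])
print(max(sc, key=sc.get))  # → d1
```

The rank/score function pairs produced by `rank_function` and the raw scores are exactly the objects whose graphs are compared in the framework to decide when combination is likely to help.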
A visual analytic framework for data fusion in investigative intelligence
NASA Astrophysics Data System (ADS)
Cai, Guoray; Gross, Geoff; Llinas, James; Hall, David
2014-05-01
Intelligence analysis depends on data fusion systems to provide capabilities of detecting and tracking important objects, events, and their relationships in connection to an analytical situation. However, automated data fusion technologies are not mature enough to offer reliable and trustworthy information for situation awareness. Given the trend of increasing sophistication of data fusion algorithms and loss of transparency in data fusion process, analysts are left out of the data fusion process cycle with little to no control and confidence on the data fusion outcome. Following the recent rethinking of data fusion as human-centered process, this paper proposes a conceptual framework towards developing alternative data fusion architecture. This idea is inspired by the recent advances in our understanding of human cognitive systems, the science of visual analytics, and the latest thinking about human-centered data fusion. Our conceptual framework is supported by an analysis of the limitation of existing fully automated data fusion systems where the effectiveness of important algorithmic decisions depend on the availability of expert knowledge or the knowledge of the analyst's mental state in an investigation. The success of this effort will result in next generation data fusion systems that can be better trusted while maintaining high throughput.
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the fused measurements using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Finally, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. Comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information of the visible images, and it exhibits state-of-the-art performance in terms of both fusion quality and running time.
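The two coefficient-fusion rules named above can be illustrated in isolation (a simplified sketch; the RPCA decomposition and CS reconstruction steps are omitted, and the toy coefficient blocks are invented):

```python
# Simplified sketch of the SD-based rule (for sparse-coefficient blocks)
# and the max-absolute rule (for low-rank coefficients).

def sd(block):
    """Standard deviation of a coefficient block."""
    m = sum(block) / len(block)
    return (sum((x - m) ** 2 for x in block) / len(block)) ** 0.5

def fuse_sparse_sd(block_a, block_b):
    """SD-based rule: keep the block with the higher activity (larger SD)."""
    return block_a if sd(block_a) >= sd(block_b) else block_b

def fuse_lowrank_maxabs(coeffs_a, coeffs_b):
    """Max-absolute rule: keep the coefficient with the larger magnitude."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]

print(fuse_lowrank_maxabs([0.2, -0.9, 0.1], [-0.5, 0.3, 0.0]))
# → [-0.5, -0.9, 0.1]
```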
Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.
Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei
2016-11-02
Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems: the Strap-down Inertial Navigation System (SINS), the Global Positioning System (GPS), the Bei-Dou2 (BD2), and the Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and by the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. By combining the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and thereby significantly increase the overall reliability and accuracy of the integrated navigation system.
Simulation results indicate that the proposed fault tolerant fusion framework provides superior performance over its traditional counterparts.
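The innovation-based chi-square test at the heart of such fault detection can be sketched in scalar form (threshold and values are illustrative assumptions, not the paper's):

```python
# Illustrative scalar sketch of a chi-square innovation test for sensor
# fault detection. A healthy sensor's normalized innovation should follow
# a chi-square distribution; a large value flags a likely fault.

def chi_square_test(measurement, predicted, innovation_var, threshold=6.63):
    """Flag a fault if the normalized squared innovation exceeds the
    threshold (6.63 ~ 99th percentile of chi-square with 1 dof)."""
    residual = measurement - predicted
    statistic = residual ** 2 / innovation_var
    return statistic > threshold, statistic

fault, stat = chi_square_test(measurement=12.0, predicted=10.0,
                              innovation_var=0.25)
print(fault)  # → True
```

In the paper's scheme the `predicted` value would come from an IMM state propagator kept isolated from potentially corrupted measurements, which is what the two alternately reactivated predictors provide.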
The role of data fusion in predictive maintenance using digital twin
NASA Astrophysics Data System (ADS)
Liu, Zheng; Meyendorf, Norbert; Mrad, Nezih
2018-04-01
The modern aerospace industry is migrating from reactive to proactive and predictive maintenance to increase platform operational availability and efficiency, extend its useful life cycle, and reduce its life cycle cost. Multiphysics modeling together with data-driven analytics generates a new paradigm called the "Digital Twin." The digital twin is a living model of the physical asset or system, which continually adapts to operational changes based on collected online data and information and can forecast the future of the corresponding physical counterpart. This paper reviews the overall framework for developing a digital twin coupled with industrial Internet of Things technology to advance aerospace platform autonomy. Data fusion techniques play a particularly significant role in the digital twin framework. The flow of information from raw data to high-level decision making is propelled by sensor-to-sensor, sensor-to-model, and model-to-model fusion. This paper further discusses and identifies the role of data fusion in the digital twin framework for aircraft predictive maintenance.
A novel framework for command and control of networked sensor systems
NASA Astrophysics Data System (ADS)
Chen, Genshe; Tian, Zhi; Shen, Dan; Blasch, Erik; Pham, Khanh
2007-04-01
In this paper, we propose an advanced command and control framework for sensor networks used for future Integrated Fire Control (IFC). The primary goal is to enable and enhance target detection, validation, and mitigation for future military operations using graphical game theory and advanced knowledge information fusion infrastructures. The problem is approached by representing distributed sensor and weapon systems as generic warfare resources which must be optimized in order to achieve the operational benefits afforded by enabling a system of systems. This paper addresses the importance of achieving a Network Centric Warfare (NCW) foundation of information superiority: shared, accurate, and timely situational awareness upon which advanced automated management aids for IFC can be built. The approach uses the Data Fusion Information Group (DFIG) fusion hierarchy of Level 0 through Level 4 to fuse the input data into assessments of the enemy target system threats in a battlespace to which military force is being applied. Compact graph models are employed across all levels of the fusion hierarchy to accomplish integrative data fusion and information flow control, as well as cross-layer sensor management. The functional block at each fusion level has a set of innovative algorithms that not only exploit the corresponding graph model in a computationally efficient manner, but also permit combined functional experiments across levels by virtue of the unifying graphical model approach.
Value Driven Information Processing and Fusion
2016-03-01
The objective of the project is to develop a general framework for value-driven decentralized information processing and fusion, including: optimal data reduction in a network setting for decentralized inference with a quantization constraint; and interactive fusion that allows queries... A consensus approach allows a decentralized system to achieve the optimal error exponent of its centralized counterpart, a significant conclusion.
The fusion of large scale classified side-scan sonar image mosaics.
Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan
2006-07-01
This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
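The first fusion model's voting idea can be illustrated per pixel (a toy sketch; the Markov random field regularization and the inpainting step are omitted, and the class labels are invented):

```python
# Toy sketch of per-pixel fusion of co-registered seafloor class labels:
# an unweighted vote for when source reliabilities are unknown, and a
# reliability-weighted variant corresponding to the second model's idea.

from collections import Counter

def vote_fusion(labels):
    """Majority vote over per-source class labels for one pixel."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote_fusion(labels, reliabilities):
    """Each source's vote is weighted by its reliability in [0, 1]."""
    tally = {}
    for lab, w in zip(labels, reliabilities):
        tally[lab] = tally.get(lab, 0.0) + w
    return max(tally, key=tally.get)

print(vote_fusion(["sand", "rock", "sand"]))                  # → sand
print(weighted_vote_fusion(["sand", "rock", "rock"],
                           [0.9, 0.3, 0.3]))                  # → sand
```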
NASA Astrophysics Data System (ADS)
Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang
2017-12-01
In recent years, the Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized and the background is cluttered, the performance of the current SVT framework is limited. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances retrieval precision. We conducted intensive experiments on small- to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13%, 15%, and 15%, respectively.
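The Hausdorff distance underlying the robust metric can be sketched minimally (the retrieval-oriented robust variant used in the paper differs in detail, typically replacing the max with a ranked or partial statistic):

```python
# Minimal sketch of the symmetric Hausdorff distance between two 2-D
# point sets: the worst-case distance from a point in one set to its
# nearest neighbour in the other.

def hausdorff(set_a, set_b):
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        return max(min(d(x, y) for y in ys) for x in xs)
    return max(directed(set_a, set_b), directed(set_b, set_a))

a = [(0, 0), (1, 0)]
b = [(0, 0), (0, 3)]
print(hausdorff(a, b))  # → 3.0
```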
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
A comparison of synthesis and integrative approaches for meaning making and information fusion
NASA Astrophysics Data System (ADS)
Eggleston, Robert G.; Fenstermacher, Laurie
2017-05-01
Traditionally, information fusion approaches to meaning making have been integrative or aggregative in nature, creating meaning "containers" in which to put content (e.g., attributes) about object classes. In large part, this was due to the limits of the technology/tools supporting information fusion (e.g., computers). A different, synthesis-based approach to meaning making is described which takes advantage of computing advances. The approach is not focused on the events/behaviors being observed/sensed; instead, it is human-work centric. The former director of the Defense Intelligence Agency once wrote, "Context is king. Achieving an understanding of what is happening - or will happen - comes from a truly integrated picture of an area, the situation and the various personalities in it…a layered approach over time that builds depth of understanding."1 The synthesis-based meaning making framework enables this understanding. It is holistic (both the sum and the parts, the proverbial forest and the trees), multi-perspective, and emulative (as opposed to representational). The two approaches are complementary, with the synthesis-based meaning making framework acting as a wrapper. The integrative approach is dominant at Level 0/1 fusion (data fusion and track formation), while synthesis-based meaning making becomes dominant at the higher fusion levels (Levels 2 and 3), although both may be in play. A synthesis-based approach to information fusion is thus well suited for "gray zone" challenges, which involve aggression and ambiguity and are inherently perspective dependent (e.g., recent events in Ukraine).
Mathematical Fundamentals of Probabilistic Semantics for High-Level Fusion
2013-12-02
...understanding of the fundamental aspects of uncertainty representation and reasoning that a theory of hard and soft high-level fusion must encompass. Successful completion requires an unbiased, in-depth... hard and soft information is the lack of a fundamental HLIF theory, backed by a consistent mathematical framework and supporting algorithms. Although there...
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
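One candidate normative baseline of the kind described, an independent-detections ("probability summation") model, can be sketched as follows (the model choice, tolerance, and numbers are illustrative assumptions, not values from the study):

```python
# Hedged sketch of the normative comparison: observed performance with the
# fused display is classified against a model prediction built from the
# pilot's single-sensor detection rates.

def predicted_fused_detection(p_sensor1, p_sensor2):
    """Predicted detection rate if the two component images were used
    independently: P = 1 - (1 - p1)(1 - p2)."""
    return 1 - (1 - p_sensor1) * (1 - p_sensor2)

def classify_fusion_display(p_observed, p1, p2, tol=0.02):
    """Map observed fused-display performance onto the four outcomes."""
    best_single = max(p1, p2)
    pred = predicted_fused_detection(p1, p2)
    if p_observed < best_single - tol:
        return "interference"        # worse than a single-sensor display
    if p_observed < pred - tol:
        return "sub-optimal"
    if p_observed <= pred + tol:
        return "optimal"
    return "super-optimal"           # emergent features exploited

print(classify_fusion_display(p_observed=0.97, p1=0.70, p2=0.80))
# → super-optimal
```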
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military operations, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
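One representative spectral-quality metric of the kind used in such analyses, the correlation coefficient between a fused band and the corresponding multispectral band, can be sketched as follows (band values are invented; the paper employs several further metrics):

```python
# Sketch of a spectral-quality check: Pearson correlation between a fused
# band and the original multispectral band (1.0 = spectral content fully
# preserved).

def correlation_coefficient(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

fused_band = [10.0, 12.0, 15.0, 11.0]
ms_band = [9.0, 12.5, 14.0, 11.5]
print(round(correlation_coefficient(fused_band, ms_band), 3))  # → 0.918
```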
Parallel Molecular Distributed Detection With Brownian Motion.
Rogers, Uri; Koh, Min-Sung
2016-12-01
This paper explores the in vivo distributed detection of an undesired biological agent's (BA) biomarkers by a group of biologically sized nanomachines in an aqueous medium under drift. The term distributed indicates that the system information relative to the BA's presence is dispersed across the collection of nanomachines, where each nanomachine possesses limited communication, computation, and movement capabilities. Using Brownian motion with drift, a probabilistic detection and optimal data fusion framework, coined molecular distributed detection, is introduced that combines theory from both molecular communication and distributed detection. Using the optimal data fusion framework as a guide, simulation indicates that a sub-optimal fusion method exists, allowing for a significant reduction in implementation complexity while retaining BA detection accuracy.
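A classical optimal fusion rule for distributed binary decisions, the Chair-Varshney log-likelihood-ratio rule, gives the flavor of such a framework (a sketch only; the paper's Brownian-motion channel modeling is omitted and the probabilities are invented):

```python
# Sketch of log-likelihood-ratio fusion of local 0/1 decisions from
# nanomachines with known detection (pd) and false-alarm (pf) rates.

import math

def fuse_decisions(decisions, pd, pf, prior_ratio=1.0):
    """Chair-Varshney-style fusion.

    decisions   -- local decisions u_i in {0, 1}
    pd, pf      -- per-nanomachine detection / false-alarm probabilities
    prior_ratio -- P(agent present) / P(agent absent)
    Returns the global decision (1 = biological agent present).
    """
    llr = math.log(prior_ratio)
    for u, d, f in zip(decisions, pd, pf):
        llr += math.log(d / f) if u == 1 else math.log((1 - d) / (1 - f))
    return 1 if llr > 0 else 0

print(fuse_decisions([1, 1, 0], pd=[0.9, 0.8, 0.7], pf=[0.1, 0.2, 0.1]))
# → 1
```

A sub-optimal but simpler alternative, as the abstract suggests, would replace the per-machine weights with a plain counting (k-out-of-n) rule.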
A novel framework of tissue membrane systems for image fusion.
Zhang, Zulin; Yi, Xinzhong; Peng, Hong
2014-01-01
This paper proposes a tissue membrane system-based framework to deal with the optimal image fusion problem. A spatial domain fusion algorithm is given, and a tissue membrane system of multiple cells is used as its computing framework. Based on the multicellular structure and inherent communication mechanism of the tissue membrane system, an improved velocity-position model is developed. The performance of the fusion framework is studied with comparison of several traditional fusion methods as well as genetic algorithm (GA)-based and differential evolution (DE)-based spatial domain fusion methods. Experimental results show that the proposed fusion framework is superior or comparable to the other methods and can be efficiently used for image fusion.
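The velocity-position update that the improved model builds on can be sketched in PSO style (a loose sketch under stated assumptions; the tissue membrane system's multicellular structure and communication mechanism are not modeled, and all coefficients are invented):

```python
# Illustrative velocity-position update for one candidate solution (e.g.,
# a vector of fusion weights), of the kind evolved inside each cell.

def update_particle(pos, vel, personal_best, global_best,
                    w=0.7, c1=1.4, c2=1.4, r1=0.5, r2=0.5):
    """One velocity-position step: inertia plus attraction toward the
    particle's own best and the best found elsewhere (for a tissue
    membrane system, 'elsewhere' would be communicated between cells)."""
    new_vel = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
               for x, v, pb, gb in zip(pos, vel, personal_best, global_best)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = update_particle([0.2, 0.8], [0.0, 0.0], [0.3, 0.7], [0.5, 0.5])
print([round(x, 3) for x in pos])  # → [0.48, 0.52]
```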
NASA Astrophysics Data System (ADS)
Brandon, R.; Page, S.; Varndell, J.
2012-06-01
This paper presents a novel application of Evidential Reasoning to Threat Assessment for critical infrastructure protection. A fusion algorithm based on the PCR5 Dezert-Smarandache fusion rule is proposed which fuses alerts generated by a vision-based behaviour analysis algorithm and a priori watch-list intelligence data. The fusion algorithm produces a prioritised event list according to a user-defined set of event-type severity or priority weightings. Results generated from application of the algorithm to real data and Behaviour Analysis alerts captured at London's Heathrow Airport under the EU FP7 SAMURAI programme are presented. A web-based demonstrator system is also described which implements the fusion process in real time. It is shown that this system significantly reduces the data deluge problem and directs the user's attention to the most pertinent alerts, enhancing their Situational Awareness (SA). The end-user is also able to alter the perceived importance of different event types in real time, allowing the system to adapt rapidly to changes in priorities as the situation evolves. One of the key challenges associated with fusing information derived from intelligence data is the issue of Data Incest. Techniques for handling Data Incest within Evidential Reasoning frameworks are proposed, and comparisons are drawn with Data Incest management techniques commonly employed within Bayesian fusion frameworks (e.g., Covariance Intersection). The challenges associated with simultaneously dealing with conflicting information and Data Incest in Evidential Reasoning frameworks are also discussed.
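The PCR5 rule itself, for two basic belief assignments over singleton hypotheses, can be sketched as follows (general focal sets and the alert-fusion specifics are omitted; hypothesis names and masses are invented):

```python
# Hedged sketch of PCR5 combination of two basic belief assignments (BBAs)
# restricted to singleton hypotheses: take the conjunctive consensus, then
# redistribute each partial conflict proportionally back to the two
# hypotheses that produced it (unlike Dempster's rule, no normalization).

def pcr5(m1, m2):
    hyps = set(m1) | set(m2)
    fused = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    for x in hyps:
        for y in hyps:
            if x == y:
                continue
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                fused[x] += a * a * b / (a + b)  # x's share of m1(x)m2(y)
            c, d = m2.get(x, 0.0), m1.get(y, 0.0)
            if c + d > 0:
                fused[x] += c * c * d / (c + d)  # x's share of m2(x)m1(y)
    return fused

m = pcr5({"threat": 0.7, "benign": 0.3}, {"threat": 0.6, "benign": 0.4})
print(round(sum(m.values()), 6))  # → 1.0 (PCR5 preserves total mass)
```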
Real-time sensor validation and fusion for distributed autonomous sensors
NASA Astrophysics Data System (ADS)
Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.
2004-04-01
Multi-sensor data fusion has found widespread application in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates distribution of intelligence to the sensor level and sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform to test different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for a specific sensor fusion application. In the version of the model presented in this paper, confidence-weighted averaging is employed to address the dynamic system state estimation problem. The state is computed using an adaptive estimator and a dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
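Confidence-weighted averaging at the numeric fusion layer can be sketched as follows (here the confidence weights derive from assumed inverse error variances; the RTSVFF's dynamic validation curve is not modeled, and the readings are invented):

```python
# Illustrative sketch of confidence-weighted averaging of redundant
# sensor readings: each reading is weighted by 1 / (its error variance),
# so a drifting or noisy sensor contributes little to the fused value.

def confidence_weighted_average(readings, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Three temperature sensors, the third drifting and thus less trusted:
print(confidence_weighted_average([500.0, 502.0, 540.0], [1.0, 1.0, 100.0]))
# → about 501.2 (the outlier is heavily down-weighted)
```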
Tang, Yongchuan; Zhou, Deyun; Chan, Felix T S
2018-06-11
Quantifying the degree of uncertainty in the Dempster-Shafer evidence theory (DST) framework with belief entropy is still an open issue, and remains largely unexplored under the open-world assumption. Existing uncertainty measures in the DST framework are limited to the closed world, where the frame of discernment (FOD) is assumed to be complete. To address this issue, this paper extends a belief entropy to the open world by simultaneously considering the uncertain information represented by the FOD and the nonzero mass function of the empty set. An extension of Deng entropy to the open-world assumption (EDEOW) is proposed as a generalization of Deng entropy; it degenerates to the Deng entropy in the closed world where appropriate. To test the reasonability and effectiveness of the extended belief entropy, an EDEOW-based information fusion approach is proposed and applied to sensor data fusion under uncertainty. The experimental results verify the usefulness and applicability of the extended measure as well as the modified sensor data fusion method. A few open issues remain: the properties required of a belief entropy under the open-world assumption, whether a belief entropy exists that satisfies all existing properties, and what the most appropriate fusion frame for sensor data fusion under uncertainty is.
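For reference, the closed-world Deng entropy that EDEOW generalizes can be computed directly from a mass function. The sketch below implements only the closed-world measure (the open-world extension itself is not reproduced here), with invented masses:

```python
import math

def deng_entropy(m):
    """Closed-world Deng entropy of a mass function m
    (dict: frozenset of hypotheses -> mass):
        Ed(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) )
    For singleton-only masses this reduces to Shannon entropy."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

a, b = frozenset("a"), frozenset("b")
# Singleton masses only: reduces to Shannon entropy, here 1 bit.
e1 = deng_entropy({a: 0.5, b: 0.5})
# All mass on the composite set {a, b}: extra non-specificity raises
# the entropy above the Shannon value.
e2 = deng_entropy({a | b: 1.0})
```

The 2^|A| - 1 denominator is what charges composite focal elements for their non-specificity, the property the paper carries over to the open world.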
Chowdhury, Rasheda Arman; Zerouali, Younes; Hedrich, Tanguy; Heers, Marcel; Kobayashi, Eliane; Lina, Jean-Marc; Grova, Christophe
2015-11-01
The purpose of this study is to develop and quantitatively assess whether fusion of EEG and MEG (MEEG) data within the maximum entropy on the mean (MEM) framework increases the spatial accuracy of source localization, by yielding better recovery of the spatial extent and propagation pathway of the underlying generators of inter-ictal epileptic discharges (IEDs). The key element in this study is the integration of the complementary information from EEG and MEG data within the MEM framework. MEEG was compared with EEG and MEG when localizing single transient IEDs. The fusion approach was evaluated using realistic simulation models involving one or two spatially extended sources mimicking propagation patterns of IEDs. We also assessed the number of EEG electrodes required for an efficient EEG-MEG fusion. MEM was compared with minimum norm estimate, dynamic statistical parametric mapping, and standardized low-resolution electromagnetic tomography. The fusion approach was finally assessed on real epileptic data recorded from two patients showing IEDs simultaneously in EEG and MEG. Overall, the localization of MEEG data using MEM provided better recovery of the source spatial extent, greater sensitivity to source depth, and more accurate detection of the onset and propagation of IEDs than EEG or MEG alone. MEM was more accurate than the other methods. MEEG proved more robust than EEG and MEG for single IED localization in low signal-to-noise ratio conditions. We also showed that only a few EEG electrodes are required to bring additional relevant information to MEG during MEM fusion.
Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.
Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias
2017-11-27
Multimodal medical image fusion combines information from two or more images in order to improve diagnostic value. While previous applications have mainly focused on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound, and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which also allow us to track the position and orientation of arbitrary cameras. We are thereby able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.
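Relating the 2D infrared-camera coordinate system to the 3D MRI coordinate system amounts to a camera projection once the neuronavigation system supplies the camera pose. Below is a generic pinhole-model sketch; the pose, intrinsics, and 3D point are invented, and the paper's calibration procedure is not reproduced:

```python
def project_point(p_world, R, t, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (e.g. in MRI coordinates) into
    2D pixel coordinates of a tracked camera, given rotation R (3x3 as
    row tuples), translation t, and intrinsics fx, fy, cx, cy."""
    # World -> camera coordinates: p_cam = R * p_world + t
    x, y, z = (sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
               for i in range(3))
    if z <= 0:
        raise ValueError("point lies behind the camera")
    # Perspective division, then mapping through the intrinsics.
    return fx * x / z + cx, fy * y / z + cy

# Identity pose: a point 100 mm in front of the camera, slightly off-axis.
R = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
u, v = project_point((10.0, -5.0, 100.0), R, (0.0, 0.0, 0.0),
                     fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```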
Fusion or confusion: knowledge or nonsense?
NASA Astrophysics Data System (ADS)
Rothman, Peter L.; Denton, Richard V.
1991-08-01
The terms 'data fusion,' 'sensor fusion,' 'multi-sensor integration,' and 'multi-source integration' have been used widely in the technical literature to refer to a variety of techniques, technologies, systems, and applications which employ and/or combine data derived from multiple information sources. Applications of data fusion range from real-time fusion of sensor information for the navigation of mobile robots to the off-line fusion of both human and technical strategic intelligence data. The Department of Defense Critical Technologies Plan lists data fusion in the highest priority group of critical technologies, but just what is data fusion? The DoD Critical Technologies Plan states that data fusion involves 'the acquisition, integration, filtering, correlation, and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving system performance and utility.' More simply stated, sensor fusion refers to the combination of data from multiple sources to provide enhanced information quality and availability over that which is available from any individual source alone. This paper presents a survey of the state-of-the-art in data fusion technologies, system components, and applications. A set of characteristics which can be utilized to classify data fusion systems is presented. Additionally, a unifying mathematical and conceptual framework within which to understand and organize fusion technologies is described. A discussion of often overlooked issues in the development of sensor fusion systems is also presented.
Generalized information fusion and visualization using spatial voting and data modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.
2013-05-01
We present a novel and innovative information fusion and visualization framework for multi-source intelligence (multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be converted into numerical form for further processing downstream, followed by a short description of how this information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber layers for the purpose of tracking cyber personas. Finally we describe a path ahead for creating interactive agile networks through defender customized Cyber-cubes for network configuration and attack visualization.
INFORM Lab: a testbed for high-level information fusion and resource management
NASA Astrophysics Data System (ADS)
Valin, Pierre; Guitouni, Adel; Bossé, Eloi; Wehn, Hans; Happe, Jens
2011-05-01
DRDC Valcartier and MDA have created an advanced simulation testbed for the purpose of evaluating the effectiveness of Network Enabled Operations in a Coastal Wide Area Surveillance situation, with algorithms provided by several universities. This INFORM Lab testbed allows experimenting with high-level distributed information fusion, dynamic resource management, and configuration management, given multiple constraints on the resources and their communications networks. This paper describes the architecture of INFORM Lab, the essential concepts of goals and situation evidence, a selected set of algorithms for distributed information fusion and dynamic resource management, as well as auto-configurable information fusion architectures. The testbed provides general services which include a multilayer plug-and-play architecture and a general multi-agent framework based on John Boyd's OODA loop. The testbed's performance is demonstrated on two types of scenarios/vignettes: (1) cooperative search-and-rescue efforts, and (2) a non-cooperative smuggling scenario involving many target ships and various methods of deceit. For each mission, an appropriate subset of Canadian airborne and naval platforms is dispatched to collect situation evidence, which is fused and then used to modify the platform trajectories for the most efficient collection of further situation evidence. These platforms are fusion nodes which obey a Command and Control node hierarchy.
Adaptive Sensing and Fusion of Multi-Sensor Data and Historical Information
2009-11-06
In this report we integrate MTL and semi-supervised learning into a single framework, thereby exploiting two forms of contextual information. A key new objective of the… process [8], denoted as X ∼ BeP(B), where B is a measure on Ω. If B is continuous, X is a Poisson process with intensity B and can be constructed as X = N…
Advances in data representation for hard/soft information fusion
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey C.; Coughlin, Dan; Hall, David L.; Graham, Jacob L.
2012-06-01
Information fusion is becoming increasingly human-centric. While past systems typically relegated humans to the role of analyzing a finished fusion product, current systems are exploring the role of humans as integral elements in a modular and extensible distributed framework where many tasks can be accomplished by either human or machine performers. For example, "participatory sensing" campaigns give humans the role of "soft sensors" who upload their direct observations, or of "soft sensor platforms" who use mobile devices to record human-annotated, GPS-encoded high quality photographs, video, or audio. Additionally, in the "human-in-the-loop" role, individuals or teams using advanced human-computer interface (HCI) tools such as stereoscopic 3D visualization, haptic interfaces, or aural "sonification" interfaces can effectively engage the innate human capability to perform pattern matching, anomaly identification, and semantic-based contextual reasoning to interpret an evolving situation. The Pennsylvania State University is participating in a Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office to investigate fusion of hard and soft data in counterinsurgency (COIN) situations. In addition to the importance of this research for Intelligence Preparation of the Battlefield (IPB), many of the same challenges and techniques apply to health and medical informatics, crisis management, crowd-sourced "citizen science", and monitoring environmental concerns. One of the key challenges that we have encountered is the development of data formats, protocols, and methodologies to establish an information architecture and framework for the effective capture, representation, transmission, and storage of the vastly heterogeneous data and accompanying metadata -- including capabilities and characteristics of human observers, uncertainty of human observations, "soft" contextual data, and information pedigree.
This paper describes our findings and offers insights into the role of data representation in hard/soft fusion.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined by weighting into a detail-enhanced layer. Because directional filters are effective at capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancing property, is efficient in preserving and enhancing the detail information of multimodality medical images.
Advanced algorithms for distributed fusion
NASA Astrophysics Data System (ADS)
Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.
2008-03-01
The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge emerges of fusing data from across the battlespace into an operational picture for real-time Situational Awareness. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low-level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture, which utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.
2007-11-01
Florea, Anne-Laure Jousselme, Éloi Bossé; DRDC Valcartier TR 2003-319; Defence R&D Canada – Valcartier; November 2007. Only table-of-contents fragments of this record survive: 3.3.2 Imprecise information; 3.3.3 Uncertain and imprecise information; …information proposed by Philippe Smets; Figure 5: The process of information modelling.
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods do not account for the processing of uncertain information. This paper therefore proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which can deal with uncertain information in the process of specific emitter identification. Each radar generates a body of evidence based on the information it obtains, and the main task is to fuse the multiple bodies of evidence to reach a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between pieces of evidence, and a quantum mechanical approach based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
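The evidence combination at the heart of such a recursive centralized fusion model is classically Dempster's rule. A minimal two-radar sketch follows; the emitter hypotheses and masses are invented, and the paper's correlation coefficient and quantum mechanical weighting are not modelled:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    (dicts: frozenset -> mass) with normalisation by 1 - conflict."""
    raw = {}
    conflict = 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            X = A & B
            if X:
                raw[X] = raw.get(X, 0.0) + a * b
            else:
                conflict += a * b
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {X: v / (1.0 - conflict) for X, v in raw.items()}

# Two radars reporting on hypothetical emitter types {e1, e2};
# mass on the union expresses each radar's ignorance.
E1, E2 = frozenset({"e1"}), frozenset({"e2"})
both = E1 | E2
r1 = {E1: 0.6, both: 0.4}
r2 = {E1: 0.5, E2: 0.2, both: 0.3}
fused = dempster_combine(r1, r2)
```

In a recursive centralized scheme, `fused` would simply be combined with the next radar's evidence in the same way.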
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field, providing an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can handle nonlinear and non-Gaussian problems and has therefore been widely used. Various forms of the importance density allow the particle filter to remain valid when the target is occluded, or to recover after tracking failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, increasing the amount of computation. In this paper, the particle filter is combined with the mean shift algorithm, addressing the deficiencies of the classic mean shift tracking algorithm: it is easily trapped in local minima and unable to reach the global optimum against complex backgrounds. From the two perspectives of adaptive multi-information fusion and combination with the particle filter framework, we extend the classic mean shift tracking framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multi-information fusion. After analyzing the infrared characteristics of the target, the algorithm first extracts the target's gray-level and edge features, then guides these two features with the target's motion information, yielding motion-guided grayscale and motion-guided edge features. A new adaptive fusion mechanism then integrates these two new features adaptively into the mean shift tracking framework.
Finally, an automatic target-model updating strategy is designed to further improve tracking performance. Experimental results show that this algorithm compensates for the particle filter's heavy computational load and effectively overcomes the tendency of mean shift to fall into local extrema rather than the global maximum. Because gray-level and target motion information are fused, the approach also suppresses interference from the background, ultimately improving the stability and real-time performance of target tracking.
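The bootstrap particle filter that such trackers build on follows a predict-update-resample cycle. The 1-D random-walk sketch below illustrates that cycle only, with invented noise levels and measurements, and without the mean shift combination proposed in the paper:

```python
import math
import random

def particle_filter_step(particles, weights, measurement,
                         motion_std=1.0, meas_std=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter
    for a 1-D random-walk state observed in Gaussian noise."""
    # Predict: propagate each particle through the motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: reweight by the Gaussian measurement likelihood.
    weights = [w * math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

random.seed(0)
n = 500
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
for z in [2.0, 2.5, 3.0, 3.5]:   # invented noisy position measurements
    particles, weights = particle_filter_step(particles, weights, z)
estimate = sum(p * w for p, w in zip(particles, weights))
```

The exponential growth in the particle count that the abstract mentions arises because this per-dimension sampling must cover the whole state space.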
Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Zhang, Caiyun
2015-06-01
Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, the 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of the outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved, with overall accuracies of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
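The ensemble analysis step, producing a final habitat label from the three classifiers' outcomes, can be as simple as a majority vote. The habitat labels below are invented, and the paper's exact ensemble rule may differ:

```python
from collections import Counter

def ensemble_label(predictions):
    """Majority vote over per-classifier labels for one image object;
    ties break in favour of the label that appears first."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels from Random Forest, SVM, and k-NN for three
# image objects produced by the OBIA segmentation.
per_object = [
    ("seagrass", "seagrass", "coral"),
    ("coral", "coral", "coral"),
    ("sand", "coral", "sand"),
]
fused = [ensemble_label(p) for p in per_object]
```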
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.
Bayesian Knowledge Fusion in Prognostics and Health Management—A Case Study
NASA Astrophysics Data System (ADS)
Rabiei, Masoud; Modarres, Mohammad; Mohammad-Djafari, Ali
2011-03-01
In the past few years, a research effort has been in progress at the University of Maryland to develop a Bayesian framework based on Physics of Failure (PoF) for risk assessment and fleet management of aging airframes. Despite significant achievements in modelling crack growth behavior using fracture mechanics, it is still of great interest to find practical techniques for monitoring crack growth using nondestructive inspection and to integrate such inspection results with the fracture mechanics models to improve the predictions. The ultimate goal of this effort is to develop an integrated probabilistic framework for utilizing all of the available information to arrive at enhanced (less uncertain) predictions of the structural health of the aircraft in future missions. Such information includes material-level fatigue models and test data, health monitoring measurements, and inspection field data. In this paper, a case study of using a Bayesian fusion technique for integrating information from multiple sources in a structural health management problem is presented.
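The core of any such Bayesian fusion step is updating a prior (e.g. from fatigue models and test data) with the likelihood of an inspection outcome. The discrete sketch below uses a hypothetical crack-growth-rate class and invented numbers, not the paper's PoF models:

```python
def bayes_update(prior, likelihood):
    """Discrete Bayes update: posterior over parameter values given one
    piece of inspection evidence. Both arguments are dicts keyed by the
    candidate parameter value."""
    post = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(post.values())          # normalising constant (evidence)
    return {k: v / z for k, v in post.items()}

# Prior over a hypothetical crack-growth-rate class from fatigue models
prior = {"slow": 0.5, "medium": 0.3, "fast": 0.2}
# Assumed likelihood of the observed NDI indication under each class
likelihood = {"slow": 0.1, "medium": 0.4, "fast": 0.7}
posterior = bayes_update(prior, likelihood)
```

Repeating the update with each new inspection or health-monitoring measurement progressively narrows the prediction, which is the "less uncertain" fusion the abstract describes.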
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, combining a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) is used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function is applied to construct the fusion weight factor. For the fusion of detail images, the NSDFB is applied to decompose each detail image into multiple directional sub-images, which are fused by pulse-coupled neural networks (PCNN), respectively. The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
Heterogeneous data fusion for brain tumor classification.
Metsis, Vangelis; Huang, Heng; Andronesi, Ovidiu C; Makedon, Fillia; Tzika, Aria
2012-10-01
Current research in biomedical informatics involves analysis of multiple heterogeneous data sets. This includes patient demographics, clinical and pathology data, treatment history, patient outcomes as well as gene expression, DNA sequences and other information sources such as gene ontology. Analysis of these data sets could lead to better disease diagnosis, prognosis, treatment and drug discovery. In this report, we present a novel machine learning framework for brain tumor classification based on heterogeneous data fusion of metabolic and molecular datasets, including state-of-the-art high-resolution magic angle spinning (HRMAS) proton (1H) magnetic resonance spectroscopy and gene transcriptome profiling, obtained from intact brain tumor biopsies. Our experimental results show that our novel framework outperforms any analysis using individual dataset.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images, based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain, is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by the NSST. Then, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is used to decide the iteration number adaptively. A series of images from diverse scenes are used in fusion experiments, and the fusion results are evaluated subjectively and objectively. The results show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
NASA Astrophysics Data System (ADS)
Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan
2008-03-01
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult, while the volume of vessels tracked and the number of sources make it difficult for the limited operations centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating the process by which operators integrate the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of a candidate pair in this space depends on the values of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: (1) accept the fusion, in which case the tracks are fused into one track; (2) reject the fusion, in which case the candidate pair is removed from the list of potential fusions; and (3) defer the fusion, in which case no fusion occurs but the candidate pair remains in the list of potential fusions until sufficient information is available. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the thresholds for automatic fusion decisions while minimizing the list of unresolved cases left to the operator.
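The accept/reject/defer logic over the decision space can be sketched as a thresholded score. The thresholds, the product combination of the three parameters, and the example values are all invented for illustration, not taken from the paper:

```python
def combined_score(detection_efficiency, geo_feasibility, track_quality):
    """One simple way to combine the three independent parameters:
    treat each as a probability-like value in [0, 1] and multiply."""
    return detection_efficiency * geo_feasibility * track_quality

def fusion_decision(score, accept_threshold=0.8, reject_threshold=0.3):
    """Map a combined match score for a candidate track pair to one of
    the three decisions."""
    if score >= accept_threshold:
        return "accept"   # fuse the two tracks into one
    if score <= reject_threshold:
        return "reject"   # drop the pair from the candidate list
    return "defer"        # keep the pair pending more evidence
```

Tuning `accept_threshold` and `reject_threshold` trades automation against the size of the deferred list left to the operator, which is exactly the optimization the paper describes.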
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated on three key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations, demonstrating that the proposed method enhances the directional features as well as fine edge details while reducing redundant details, artifacts, and distortions.
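A PCA stage of this kind assigns fusion weights from the principal eigenvector of the covariance of the two registered images. A self-contained sketch follows (pure Python on synthetic pixel lists, not the paper's cascaded PCA-plus-wavelet pipeline):

```python
import math

def pca_fusion_weights(img1, img2):
    """Fusion weights from the principal eigenvector of the 2x2
    covariance matrix of two registered images, each given as a flat
    list of pixel intensities."""
    n = len(img1)
    mu1, mu2 = sum(img1) / n, sum(img2) / n
    a = sum((x - mu1) ** 2 for x in img1) / (n - 1)        # var(img1)
    c = sum((y - mu2) ** 2 for y in img2) / (n - 1)        # var(img2)
    b = sum((x - mu1) * (y - mu2)
            for x, y in zip(img1, img2)) / (n - 1)         # covariance
    # Principal eigenvalue of [[a, b], [b, c]] and its eigenvector.
    lam = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4 * b * b))
    if abs(b) < 1e-12:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        v = (abs(b), abs(lam - a))
    s = v[0] + v[1]
    return v[0] / s, v[1] / s

# Synthetic data: img2 is a lower-contrast copy of img1, so the
# principal component (and hence the larger weight) follows img1.
img1 = [float(i % 7) for i in range(100)]
img2 = [0.3 * x + 0.1 for x in img1]
w1, w2 = pca_fusion_weights(img1, img2)
fused = [w1 * x + w2 * y for x, y in zip(img1, img2)]
```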
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new-generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to complete the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
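The final weighted-fusion step can be sketched as follows. This assumes the integrated saliency maps have already been computed (the JSR-based saliency detection itself is not shown), and the pixel-wise weighting scheme is an illustrative choice, not necessarily the paper's exact rule.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-12):
    """Pixel-wise weighted fusion driven by saliency maps: each source
    contributes in proportion to its saliency at that pixel."""
    w_ir = sal_ir / (sal_ir + sal_vis + eps)  # infrared weight in [0, 1]
    return w_ir * ir + (1.0 - w_ir) * vis
```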
Yang, Jie; Yin, Yingying; Zhang, Zuping; Long, Jun; Dong, Jian; Zhang, Yuqun; Xu, Zhi; Li, Lei; Liu, Jie; Yuan, Yonggui
2018-02-05
Major depressive disorder (MDD) is characterized by dysregulation of distributed structural and functional networks. It is now recognized that structural and functional networks are related at multiple temporal scales. The recent emergence of multimodal fusion methods has made it possible to comprehensively and systematically investigate brain networks and thereby provide essential information for influencing disease diagnosis and prognosis. However, such investigations are hampered by the inconsistent dimensionality features between structural and functional networks. Thus, a semi-multimodal fusion hierarchical feature reduction framework is proposed. Feature reduction is a vital procedure in classification that can be used to eliminate irrelevant and redundant information and thereby improve the accuracy of disease diagnosis. Our proposed framework primarily consists of two steps. The first step considers the connection distances in both structural and functional networks between MDD and healthy control (HC) groups. By adding a constraint based on sparsity regularization, the second step fully utilizes the inter-relationship between the two modalities. However, in contrast to conventional multi-modality multi-task methods, the structural networks were considered to play only a subsidiary role in feature reduction and were not included in the following classification. The proposed method achieved a classification accuracy, specificity, sensitivity, and area under the curve of 84.91%, 88.6%, 81.29%, and 0.91, respectively. Moreover, the frontal-limbic system contributed the most to disease diagnosis. Importantly, by taking full advantage of the complementary information from multimodal neuroimaging data, the selected consensus connections may be highly reliable biomarkers of MDD. Copyright © 2017 Elsevier B.V. All rights reserved.
mlCAF: Multi-Level Cross-Domain Semantic Context Fusioning for Behavior Identification.
Razzaq, Muhammad Asif; Villalonga, Claudia; Lee, Sungyoung; Akhtar, Usman; Ali, Maqbool; Kim, Eun-Soo; Khattak, Asad Masood; Seung, Hyonwoo; Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Ali Khan, Wajahat
2017-10-24
The emerging research on automatic identification of users' contexts from cross-domain environments in ubiquitous and pervasive computing systems has proved successful. Monitoring a user's diversified contexts and behaviors can help in controlling lifestyles associated with chronic diseases using context-aware applications. However, the availability of cross-domain heterogeneous contexts presents a challenging opportunity for their fusion to obtain abstract information for further analysis. This work extends our previous work from a single domain (i.e., physical activity) to multiple domains (physical activity, nutrition, and clinical) for context-awareness. We propose the multi-level Context-aware Framework (mlCAF), which fuses the multi-level cross-domain contexts in order to arbitrate richer behavioral contexts. This work explicitly focuses on key challenges linked to multi-level context modeling, reasoning, and fusioning based on the mlCAF open-source ontology. More specifically, it addresses the interpretation of contexts from three different domains and their fusioning into richer contextual information. This paper contributes in terms of ontology evolution with additional domains, context definitions, rules, and the inclusion of semantic queries. For the framework evaluation, multi-level cross-domain contexts collected from 20 users were used to ascertain abstract contexts, which served as the basis for behavior modeling and lifestyle identification. The experimental results indicate an average context recognition accuracy of around 92.65% for the collected cross-domain contexts.
The Intelligent Technologies of Electronic Information System
NASA Astrophysics Data System (ADS)
Li, Xianyu
2017-08-01
Based upon a synopsis of system intelligence and information services, this paper puts forward the attributes and logical structure of information services, sets forth an intelligent-technology framework for electronic information systems, and presents a series of measures, such as optimizing business information flow, advancing data-driven decision capability, improving information fusion precision, strengthening deep learning applications, and enhancing prognostics and health management, and demonstrates the effectiveness of system operation. These measures will benefit the enhancement of system intelligence.
Fault diagnosis model for power transformers based on information fusion
NASA Astrophysics Data System (ADS)
Dong, Ming; Yan, Zhang; Yang, Li; Judd, Martin D.
2005-07-01
Methods used to assess the insulation status of power transformers before they deteriorate to a critical state include dissolved gas analysis (DGA), partial discharge (PD) detection, and transfer function techniques. All of these approaches require experience in order to correctly interpret the observations. Artificial intelligence (AI) is increasingly used to improve interpretation of the individual datasets. However, a satisfactory diagnosis may not be obtained if only one technique is used. For example, the exact location of PD cannot be predicted if only DGA is performed. Moreover, using diverse methods may result in different diagnosis solutions, a problem that is addressed in this paper through the introduction of a fuzzy information fusion model. An inference scheme is proposed that yields consistent conclusions and manages the inherent uncertainty in the various methods. With the aid of information fusion, a framework is established that allows different diagnostic tools to be combined in a systematic way. Examples demonstrate that the application of information fusion techniques to the insulation diagnostics of transformers is promising.
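One simple way to combine fuzzy diagnoses from multiple techniques is a reliability-weighted mean of their membership vectors. This is a hypothetical sketch, not the paper's inference scheme: the fault classes, membership values, and weights below are all illustrative.

```python
def fuse_diagnoses(memberships, weights):
    """Combine fuzzy membership vectors (one dict per diagnostic technique,
    mapping fault class -> membership degree in [0, 1]) by a
    reliability-weighted mean; return the best class and the fused vector."""
    classes = memberships[0].keys()
    total = sum(weights)
    fused = {c: sum(w * m[c] for w, m in zip(weights, memberships)) / total
             for c in classes}
    return max(fused, key=fused.get), fused
```

For example, if DGA and PD detection both lean toward a partial-discharge fault with different confidence, the fused membership reflects their weighted consensus.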
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance its maintainability and portability. Three sample applications built atop this framework are presented to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
Statistical label fusion with hierarchical performance models
Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.
2014-01-01
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809
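As context for what label fusion does, the baseline that statistical fusion generalizes can be sketched as a flat, performance-weighted vote. This is a simplification for illustration only: it weights each rater by a fixed accuracy estimate and does not implement the hierarchical performance models the paper proposes.

```python
import numpy as np

def weighted_label_fusion(labels, rater_acc, n_classes):
    """Simplified label fusion: each rater's vote for a voxel is weighted by
    an estimate of that rater's accuracy; the fused label is the argmax of
    the accumulated weighted votes.  labels: int array (n_raters, n_voxels)."""
    n_voxels = labels.shape[1]
    votes = np.zeros((n_classes, n_voxels))
    for r, acc in enumerate(rater_acc):
        # add this rater's accuracy weight to the class it voted for, per voxel
        votes[labels[r], np.arange(n_voxels)] += acc
    return votes.argmax(axis=0)
```

A hierarchical model would instead score errors differently depending on where the confused labels sit in the anatomical label hierarchy.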
NASA Astrophysics Data System (ADS)
Benaskeur, Abder R.; Roy, Jean
2001-08-01
Sensor Management (SM) has to do with how best to manage, coordinate, and organize the use of sensing resources in a manner that synergistically improves the process of data fusion. Based on contextual information, SM develops options for collecting further information, allocates and directs the sensors towards the achievement of the mission goals, and/or tunes the parameters for the real-time improvement of the effectiveness of the sensing process. Conscious of the important role that SM has to play in modern data fusion systems, we are currently studying advanced SM concepts that would help increase the survivability of the current Halifax and Iroquois Class ships, as well as their possible future upgrades. For this purpose, a hierarchical scheme has been proposed for data fusion and resource management adaptation, based on control theory, within the process refinement paradigm of the JDL data fusion model, and taking into account the multi-agent model put forward by the SASS Group for the situation analysis process. The novelty of this work lies in the unified framework that has been defined for tackling the adaptation of both the fusion process and the sensor/weapon management.
Earth Science Data Fusion with Event Building Approach
NASA Technical Reports Server (NTRS)
Lukashin, C.; Bartle, Ar.; Callaway, E.; Gyurjyan, V.; Mancilla, S.; Oyarzun, R.; Vakhnin, A.
2015-01-01
The objectives of the NASA Information And Data System (NAIADS) project are to develop a prototype of a conceptually new middleware framework to modernize and significantly improve the efficiency of Earth Science data fusion, big data processing, and analytics. The key components of NAIADS include: a Service Oriented Architecture (SOA) multi-lingual framework, a multi-sensor coincident data Predictor, fast in-memory data Staging, a multi-sensor data-Event Builder, complete data-Event streaming (a workflow with minimized IO), and on-line data processing control and analytics services. The NAIADS project leverages the CLARA framework, developed at Jefferson Lab, integrated with the ZeroMQ messaging library. The science services are prototyped and incorporated into the system. Merging of SCIAMACHY Level-1 observations, MODIS/Terra Level-2 (Clouds and Aerosols) data products, and ECMWF re-analysis will be used for NAIADS demonstration and performance tests in compute Cloud and Cluster environments.
A multibiometric face recognition fusion framework with template protection
NASA Astrophysics Data System (ADS)
Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.
2010-04-01
In this work we present a multibiometric face recognition framework based on combining information from 2D and 3D facial features. The 3D biometric channel is protected by a privacy-enhancing technology, which uses error correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while at the same time enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection matches, and in some cases exceeds, the performance of the corresponding unprotected equivalents, in addition to offering additional privacy protection.
A Framework of Covariance Projection on Constraint Manifold for Data Fusion.
Bakr, Muhammad Abu; Lee, Sukhan
2018-05-17
A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data such as state predictions and measurements considered as independent variables. By the general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides a new insight into data fusion in terms of a geometric-algebraic point of view. Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.
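A minimal instance of this idea is fusing two redundant estimates of the same state: the constraint manifold is x1 = x2, and projecting under the covariance-weighted distance recovers the familiar MMSE combination. The sketch below covers only this special case with uncorrelated sources, not the general correlated, constrained formulation of the paper.

```python
import numpy as np

def fuse_two_estimates(z1, P1, z2, P2):
    """Project the stacked estimate (z1, z2) with block covariances P1, P2
    onto the constraint manifold x1 = x2 under the Mahalanobis distance.
    For uncorrelated sources this is the standard MMSE combination."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)       # fused covariance
    x = P @ (P1i @ z1 + P2i @ z2)      # fused state estimate
    return x, P
```

Note that the fused covariance is never larger than either input covariance, reflecting the information gained by fusion.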
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectral knowledge allows all available information in the data to be mined. These superior qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. 
Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
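The graph construction underlying such frameworks is commonly a k-nearest-neighbour affinity graph with Gaussian edge weights. The sketch below is a generic, dense-matrix version for illustration (the thesis's exact construction and parameters are not specified here), suitable only for small sample counts.

```python
import numpy as np

def knn_graph(X, k, sigma=1.0):
    """Build a symmetric k-nearest-neighbour affinity graph with Gaussian
    edge weights over the rows of X: (n_samples, n_features)."""
    # all pairwise squared Euclidean distances (O(n^2) memory; small n only)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:k + 1]               # skip self (distance 0)
        W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                         # symmetrize
```

The resulting affinity matrix W can feed standard graph-based methods such as spectral clustering or Laplacian-based feature extraction.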
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments, evaluated on two standard data sets, demonstrate the better classification performance offered by this framework.
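The decision-fusion step of such a collaborative framework can be sketched as a weighted average of the two classifiers' per-class probability estimates. The equal weighting below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def decision_fusion(prob_hsi, prob_vis, w_hsi=0.5):
    """Decision-level fusion: weighted average of two classifiers' per-class
    probabilities (rows = samples, columns = classes), then argmax."""
    fused = w_hsi * prob_hsi + (1.0 - w_hsi) * prob_vis
    return fused.argmax(axis=1)
```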
A data fusion framework for meta-evaluation of intelligent transportation system effectiveness
DOT National Transportation Integrated Search
This study presents a framework for the meta-evaluation of Intelligent Transportation System effectiveness. The framework is based on data fusion approaches that adjust for data biases and violations of other standard statistical assumptions. Operati...
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB provides direction filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm obtains state-of-the-art performance for visible-infrared image fusion in terms of both objective assessment and subjective visual quality, even for source images obtained in different conditions. Furthermore, the fused results have high contrast, remarkable target information, and rich detail information, making them more suitable for human visual characteristics or machine perception.
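The entropy comparison that decides which source image is decomposed into its residue can be sketched with a grey-level histogram entropy. This assumes images normalized to [0, 1]; bin count and range are illustrative choices.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram, used to
    pick the higher-entropy source whose residue is extracted first."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

Given two sources `a` and `b`, the scheme would decompose `a` first whenever `image_entropy(a) > image_entropy(b)`.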
Hierarchical patch-based co-registration of differently stained histopathology slides
NASA Astrophysics Data System (ADS)
Yigitsoy, Mehmet; Schmidt, Günter
2017-03-01
Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block using different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses such challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
Deep learning decision fusion for the classification of urban remote sensing data
NASA Astrophysics Data System (ADS)
Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter
2018-01-01
Multisensor data fusion is one of the most common and popular remote sensing data classification topics, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing research community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, a decision-level fusion classifies objects of interest by the joint use of sensors. Finally, a context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region
USDA-ARS?s Scientific Manuscript database
An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...
Measuring situational awareness and resolving inherent high-level fusion obstacles
NASA Astrophysics Data System (ADS)
Sudit, Moises; Stotz, Adam; Holender, Michael; Tagliaferri, William; Canarelli, Kathie
2006-04-01
The Information Fusion Engine for Real-time Decision Making (INFERD) is a tool that was developed to supplement current graph matching techniques in information fusion models. Based on sensory data and a priori models, INFERD dynamically generates, evolves, and evaluates hypotheses on the current state of the environment. The a priori models developed are hierarchical in nature, lending themselves to a multi-level information fusion process whose primary output provides situational awareness of the environment of interest in the context of the models running. In this paper we look at INFERD's multi-level fusion approach and provide insight on inherent problems in the approach, such as fragmentation, and the research being undertaken to mitigate those deficiencies. Due to the large variance of data in disparate environments, the awareness of situations in those environments can be drastically different. To accommodate this, the INFERD framework provides support for plug-and-play fusion modules which can be developed specifically for domains of interest. However, because the models running in INFERD are graph based, some default measurements can be provided, and these are discussed in the paper. Among them are a Depth measurement to determine how much danger is presented by the action taking place, a Breadth measurement to gain information regarding the scale of an attack that is currently happening, and a Reliability measure to tell the user the credibility of a particular hypothesis. All of these results are demonstrated in the cyber domain, where recent research has shown the area to be well-defined and bounded, so that new models and algorithms can be developed and evaluated.
NASA Astrophysics Data System (ADS)
Sabeur, Zoheir; Middleton, Stuart; Veres, Galina; Zlatev, Zlatko; Salvo, Nicola
2010-05-01
The advancement of smart sensor technology in the last few years has led to an increase in the deployment of affordable sensors for monitoring the environment around Europe. This is generating large amounts of sensor observation information and inevitably leading to problems of how to manage large volumes of data, as well as how to make sense of the data for decision-making. In addition, the various European Directives which regulate human activities in the environment (the Water Framework Directive, the Bathing Water Directive, the Habitats Directive, etc.) and the INSPIRE Directive on spatial information management have implicitly led the designated environment agencies and authorities of the European Member States to put in place new sensor monitoring infrastructure and to share information about the environmental regions under their statutory responsibility. They will need to work across borders and collectively reach environmental quality standards. They will also need to report regularly to the EC on the quality of the environments for which they are responsible and make such information accessible to members of the public. In recent years, early pioneering work on the design of service-oriented architectures using sensor networks has been carried out. Information web-service infrastructure using existing data catalogues and web-GIS map services can now be enriched with the deployment of new sensor observation, data fusion, and modelling services using OGC standards. The deployment of these new services, which describe sensor observations and intelligent data processing using data fusion techniques, can now provide added-value information, with spatio-temporal uncertainties, to the next generation of decision support service systems. These decision support service systems have become key to implement across Europe in order to comply with EU environmental regulations and INSPIRE. 
In this paper, data fusion services using OGC standards with sensor observation data streams are described in the context of a geo-distributed service infrastructure specialising in multiple environmental risk management and decision support. The sensor data fusion services are deployed and validated in two use cases, concerned respectively with: 1) microbial risk forecasting in bathing waters; and 2) geohazards in urban zones during underground tunnelling activities. This research was initiated in the SANY Integrated Project (www.sany-ip.org) and funded by the European Commission under the 6th Framework Programme.
Loose fusion based on SLAM and IMU for indoor environment
NASA Astrophysics Data System (ADS)
Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing
2018-04-01
The simultaneous localization and mapping (SLAM) method based on the RGB-D sensor has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse texture. Therefore, many fusion methods combining RGB-D information with inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points, and the pose estimated from RGB-D information may be inaccurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of lost position in scenes with little texture, a loose fusion method combining RGB-D with IMU is proposed in this paper. In the proposed method, we design a loose fusion strategy based on the RGB-D camera information and IMU data, which uses the IMU data for position estimation when the corresponding point matches are too few; when there are many matches, the RGB-D information is still used to estimate position. The final pose is optimized with the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method and can keep working stably in indoor environments with sparse texture.
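The switching logic at the heart of this loose fusion strategy can be sketched as follows; the threshold value and all names are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the loose fusion strategy: when RGB-D feature
# matches are plentiful, trust the visual pose estimate; when they fall
# below a threshold, fall back to the IMU-integrated pose.

MIN_MATCHES = 30  # assumed threshold; the paper does not specify a value

def select_pose(num_matches, rgbd_pose, imu_pose, min_matches=MIN_MATCHES):
    """Return the pose estimate to feed to g2o, and the source used."""
    if num_matches >= min_matches:
        return rgbd_pose, "rgbd"   # enough correspondences: visual estimate
    return imu_pose, "imu"         # sparse texture: integrate IMU instead
```

In a full system the selected pose would then enter the g2o graph as an odometry constraint; only the source selection is shown here.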
A methodology for hard/soft information fusion in the condition monitoring of aircraft
NASA Astrophysics Data System (ADS)
Bernardo, Joseph T.
2013-05-01
Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery with electronic sensors that continuously monitor components and collect data for analysis. The addition of the multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation, coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition. Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource: people acting as soft sensors. The literature is extensive on techniques to fuse data from electronic sensors, but little work exists on fusing data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore, observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem detection capability, improve flight safety, and increase mission readiness. The proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles. This study performs a critical assessment of the concept, which will support development of activities to demonstrate hard/soft information fusion in operationally relevant scenarios of aircraft maintenance. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver required information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm-intelligence-based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used to fuse the DCT coefficients of the visible and infrared images, and the inverse DCT is applied to obtain the initial fused image. An enhanced fused image is then obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
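The core fusion step, once PSO has produced a weighting factor, is a convex combination of the two DCT coefficient arrays. A minimal sketch (the PSO search and the DCT/IDCT transforms are omitted, and all names are illustrative):

```python
# Sketch of the coefficient-fusion step: given a PSO-optimized weight w in
# [0, 1], combine the DCT coefficient blocks of the visible and infrared
# images element-wise. Blocks are plain lists of lists for illustration.

def fuse_dct_coefficients(dct_visible, dct_infrared, w):
    """Fuse two equally sized DCT coefficient blocks with weight w."""
    assert 0.0 <= w <= 1.0, "PSO constrains the weight to [0, 1]"
    return [[w * v + (1.0 - w) * r for v, r in zip(row_v, row_r)]
            for row_v, row_r in zip(dct_visible, dct_infrared)]
```

The inverse DCT of the fused coefficients would then give the initial fused image, before adaptive histogram equalization.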
Causes of Effects and Effects of Causes
ERIC Educational Resources Information Center
Pearl, Judea
2015-01-01
This article summarizes a conceptual framework and simple mathematical methods of estimating the probability that one event was a necessary cause of another, as interpreted by lawmakers. We show that the fusion of observational and experimental data can yield informative bounds that, under certain circumstances, meet legal criteria of causation.…
NASA Astrophysics Data System (ADS)
Nurmaini, Siti; Firsandaya Malik, Reza; Stiawan, Deris; Firdaus; Saparudin; Tutuko, Bambang
2017-04-01
The information framework aims to holistically address the problems and issues posed by unwanted peat and land fires within the context of the natural environment and socio-economic systems. Informed decisions on planning and allocation of resources can only be made by understanding the landscape. Therefore, information on fire history and air quality impacts must be collected for future analysis. This paper proposes a strategic framework, based on a technology approach with a data fusion strategy, to produce data analyses of peat land fires and air quality management in South Sumatera. The research framework should use the knowledge, experience and data from previous fire seasons to review, improve and refine the strategies and monitor their effectiveness for the next fire season. Communicating effectively with communities and the public and private sectors in remote and rural landscapes is important, for example by using smartphones and mobile applications. Tools such as one-stop information services based on web applications, providing capabilities such as sending and receiving early-warning fire alerts, could be developed and promoted so that all stakeholders can share important information with each other.
Proposed evaluation framework for assessing operator performance with multisensor displays
NASA Technical Reports Server (NTRS)
Foyle, David C.
1992-01-01
Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed framework is a normative approach: the operator's performance with the sensor fusion display is compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This makes it possible to determine when a sensor fusion system leads to: 1) poorer performance than with one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); 2) better performance than with either single-sensor system alone, but at a sub-optimal level compared to the model predictions; 3) optimal performance (matching the model predictions); or 4) super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.
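The four outcome categories can be expressed as a simple decision rule over measured performance scores. A sketch assuming scalar scores where higher is better (the names and tolerance are illustrative, not from the paper):

```python
def classify_fusion_outcome(fused, best_single, model_prediction, tol=1e-9):
    """Map performance scores onto the framework's four categories.

    fused: operator performance with the sensor fusion display.
    best_single: best performance with any original single-sensor display.
    model_prediction: integration-model prediction from single-sensor data.
    """
    if fused < best_single - tol:
        return 1  # worse than the best single-sensor display
    if fused < model_prediction - tol:
        return 2  # better than single sensors, but sub-optimal
    if abs(fused - model_prediction) <= tol:
        return 3  # optimal: matches the model prediction
    return 4      # super-optimal: emergent features exploited
```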
A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction
NASA Astrophysics Data System (ADS)
Rajput, Asif; Funk, Eugen; Börner, Anko; Hellwich, Olaf
2018-07-01
Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to estimation noise in depth measurements, which results in either depth outliers or surface deformations in the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects, and therefore require additional constraints such as steady sensor movement and high frame rates to obtain high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.
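The idea of regularizing at fusion time can be illustrated with a per-voxel running weighted average whose update is damped by a regularization parameter. This is only a sketch under assumed simplifications; the paper's actual regularizer is more elaborate, and all names here are illustrative:

```python
def fuse_voxel(value, weight, measurement, lam=0.1, max_weight=100.0):
    """One voxel update in a running weighted-average depth fusion.

    The term lam pulls the update back toward the current value, damping
    the influence of a noisy measurement at the moment of fusion rather
    than relying on many redundant frames to average noise out.
    """
    blended = (weight * value + measurement) / (weight + 1.0)
    new_value = (1.0 - lam) * blended + lam * value   # regularized update
    new_weight = min(weight + 1.0, max_weight)        # cap accumulated weight
    return new_value, new_weight
```

With `lam = 0` this reduces to the conventional running average used by classic volumetric fusion pipelines.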
ComTrustO: Composite Trust-Based Ontology Framework for Information and Decision Fusion
2015-07-06
Joint modality fusion and temporal context exploitation for semantic video analysis
NASA Astrophysics Data System (ADS)
Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.
2011-12-01
In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may otherwise render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively inexpensive auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations.
More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the resulting fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image-block-residual technique and consistency verification are used to detect the focused areas, yielding a decision map that guides construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
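The SML focus measure used for coefficient selection can be sketched in a few lines; the window radius and the list-of-lists image representation are illustrative choices:

```python
def modified_laplacian(img, x, y):
    """Modified Laplacian at one interior pixel of a 2-D list-of-lists
    image: sum of absolute second differences along x and along y."""
    return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) +
            abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

def sml(img, x, y, radius=1):
    """Sum-Modified-Laplacian over a (2*radius+1)^2 window, used as the
    focus measure when choosing between source-image coefficients."""
    return sum(modified_laplacian(img, xx, yy)
               for yy in range(y - radius, y + radius + 1)
               for xx in range(x - radius, x + radius + 1))
```

In the fusion rule, the coefficient from whichever source image has the larger SML at a location would be retained.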
Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia
2018-06-12
In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. Firstly, we put forward an efficient feature selection method, based on separability and dissimilarity, to determine the selection order for each type of feature when increasing the dimension of the selected feature subsets. Secondly, the K-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, to establish the feature fusion, we propose a classification-dominance feature fusion strategy built on an effective base feature. Experimental results on two datasets show that the recognition rates on Database I and Database II reach 97.5% and 80.11%, respectively, when k = 1 for the KNN classifier and the distance metric is the correlation distance (COR), which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method proposed in this paper can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features describing different characteristics of the sensor signals, enhancing the discrimination ability of gas sensors and, to a certain extent, suppressing the drift effect.
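The k = 1 / correlation-distance classification setting reported above can be sketched as follows; the training data and function names are illustrative:

```python
def correlation_distance(a, b):
    """1 - Pearson correlation between two feature vectors (COR metric)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return 1.0 - num / den

def knn1_predict(train_x, train_y, query):
    """1-NN classification with the correlation distance, matching the
    paper's k = 1 / COR setting; the sensor feature vectors would come
    from the selected and fused feature subsets."""
    dists = [correlation_distance(x, query) for x in train_x]
    return train_y[dists.index(min(dists))]
```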
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to digital imaging and communications in medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, the image fusion results, and information generated by the gateway itself to constitute new DICOM objects. All the mandatory tags defined for a standard DICOM object are generated in the gateway system. The gateway generates two series of SOP (Service Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances, so every newly generated MR image fits exactly with one of the reconstructed PET images. These DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When the images are retrieved and viewed by standard DICOM viewing systems, both can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; these generally represent each target patch over an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, representation coefficients estimated in the image domain may not be optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with a single-layer dictionary. PMID:26942233
Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis
2012-05-01
In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.
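The linear decision-combination step, and the role of oracle feedback in adapting the weights online, can be sketched as follows. Note the weight update here is a simple multiplicative rule standing in for the paper's entropic projections onto convex sets; all names are illustrative:

```python
import math

def fuse_decisions(weights, decisions):
    """Linear combination of subalgorithm confidence values (each centered
    around zero); a positive output means the event (e.g. wildfire) is
    declared detected."""
    return sum(w * d for w, d in zip(weights, decisions))

def update_weights(weights, decisions, oracle, mu=0.1):
    """Online weight adaptation from oracle feedback (+1 / -1): boost the
    weights of subalgorithms that agreed with the oracle, then normalize.
    A simplified stand-in for the entropic-projection update."""
    new = [w * math.exp(mu * oracle * d) for w, d in zip(weights, decisions)]
    s = sum(new)
    return [w / s for w in new]
```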
Combining elements of information fusion and knowledge-based systems to support situation analysis
NASA Astrophysics Data System (ADS)
Roy, Jean
2006-04-01
Situation awareness has emerged as an important concept in military and public security environments. Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness for the decision maker(s). It is well established that information fusion, defined as the process of utilizing one or more information sources over time to assemble a representation of aspects of interest in an environment, is a key enabler to meeting the demanding requirements of situation analysis. However, although information fusion is important, developing and adopting a knowledge-centric view of situation analysis should provide a more holistic perspective of this process. This is based on the notion that awareness ultimately has to do with having knowledge of something. Moreover, not all of the situation elements and relationships of interest are directly observable. Those aspects of interest that cannot be observed must be inferred, i.e., derived as a conclusion from facts or premises, or by reasoning from evidence. This paper discusses aspects of knowledge, and how it can be acquired from experts, formally represented and stored in knowledge bases to be exploited by computer programs, and validated. Knowledge engineering is reviewed, with emphasis given to cognitive and ontological engineering. Facets of reasoning are discussed, along with inferencing methods that can be used in computer applications. Finally, combining elements of information fusion and knowledge-based systems, an overall approach and framework for the building of situation analysis support systems is presented.
LaDeau, Shannon L; Glass, Gregory E; Hobbs, N Thompson; Latimer, Andrew; Ostfeld, Richard S
2011-07-01
Ecologists worldwide are challenged to contribute solutions to urgent and pressing environmental problems by forecasting how populations, communities, and ecosystems will respond to global change. Rising to this challenge requires organizing ecological information derived from diverse sources and formally assimilating data with models of ecological processes. The study of infectious disease has depended on strategies for integrating patterns of observed disease incidence with mechanistic process models since John Snow first mapped cholera cases around a London water pump in 1854. Still, zoonotic and vector-borne diseases increasingly affect human populations, and methods used to successfully characterize directly transmitted diseases are often insufficient. We use four case studies to demonstrate that advances in disease forecasting require better understanding of zoonotic host and vector populations, as well as of the dynamics that facilitate pathogen amplification and disease spillover into humans. In each case study, this goal is complicated by limited data, spatiotemporal variability in pathogen transmission and impact, and often, insufficient biological understanding. We present a conceptual framework for data-model fusion in infectious disease research that addresses these fundamental challenges using a hierarchical state-space structure to (1) integrate multiple data sources and spatial scales to inform latent parameters, (2) partition uncertainty in process and observation models, and (3) explicitly build upon existing ecological and epidemiological understanding. Given the constraints inherent in the study of infectious disease and the urgent need for progress, fusion of data and expertise via this type of conceptual framework should prove an indispensable tool.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
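The final decision-making step, TOPSIS, is simple enough to sketch directly; this version assumes equal criterion weights and pre-normalized benefit criteria (higher is better), which simplifies the standard formulation:

```python
def topsis(scores):
    """TOPSIS: rank alternatives (rows) over benefit criteria (columns) by
    relative closeness to the ideal point versus the anti-ideal point, and
    return the index of the best alternative."""
    n_crit = len(scores[0])
    ideal = [max(row[j] for row in scores) for j in range(n_crit)]
    worst = [min(row[j] for row in scores) for j in range(n_crit)]
    closeness = []
    for row in scores:
        d_best = sum((x - b) ** 2 for x, b in zip(row, ideal)) ** 0.5
        d_worst = sum((x - w) ** 2 for x, w in zip(row, worst)) ** 0.5
        closeness.append(d_worst / (d_best + d_worst))
    return closeness.index(max(closeness))
```

In the paper's setting the alternatives would be candidate consensus segmentations and the criteria the two complementary fusion measures.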
Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion
Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien
2014-01-01
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148
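The locally weighted voting idea underlying STEPS-style label fusion can be sketched for a single voxel; the similarity scores here stand in for the locally normalised cross-correlation, and STAPLE's performance-level estimation is omitted (all names are illustrative):

```python
def weighted_label_fusion(atlas_labels, atlas_similarities):
    """Locally weighted voting over propagated atlas labels for one voxel:
    each atlas's label is weighted by its local image-similarity score,
    and the label with the largest total weight wins."""
    votes = {}
    for lab, sim in zip(atlas_labels, atlas_similarities):
        votes[lab] = votes.get(lab, 0.0) + sim
    return max(votes, key=votes.get)
```

STEPS additionally uses these similarity scores to select which atlases participate at all, before the fusion step shown here.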
A Decision Fusion Framework for Treatment Recommendation Systems.
Mei, Jing; Liu, Haifeng; Li, Xiang; Xie, Guotong; Yu, Yiqin
2015-01-01
Treatment recommendation is a nontrivial task--it requires not only domain knowledge from evidence-based medicine, but also data insights from descriptive, predictive and prescriptive analysis. A single treatment recommendation system is usually trained or modeled with a limited (size or quality) source. This paper proposes a decision fusion framework, combining both knowledge-driven and data-driven decision engines for treatment recommendation. End users (e.g. using the clinician workstation or mobile apps) could have a comprehensive view of various engines' opinions, as well as the final decision after fusion. For implementation, we leverage several well-known fusion algorithms, such as decision templates and meta classifiers (of logistic and SVM, etc.). Using an outcome-driven evaluation metric, we compare the fusion engine with base engines, and our experimental results show that decision fusion is a promising way towards a more valuable treatment recommendation.
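A minimal sketch of soft decision fusion across engines, standing in for the decision-template and meta-classifier combiners mentioned above (engine outputs and option names are illustrative):

```python
def decision_template_fusion(engine_outputs):
    """Soft decision fusion across recommendation engines: each engine
    emits a probability per treatment option; this combiner averages the
    per-option profiles and picks the argmax. Real decision templates
    compare the profile to class-wise templates; averaging is the
    simplest instance of that family."""
    n = len(engine_outputs)
    options = engine_outputs[0].keys()
    avg = {opt: sum(e[opt] for e in engine_outputs) / n for opt in options}
    return max(avg, key=avg.get)
```

An end user would still see each engine's individual opinion alongside this fused decision, as the abstract describes.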
NASA Astrophysics Data System (ADS)
Tacnet, Jean-Marc; Dupouy, Guillaume; Carladous, Simon; Dezert, Jean; Batton-Hubert, Mireille
2017-04-01
In mountain areas, natural phenomena such as snow avalanches, debris flows and rock falls put people and property at risk, sometimes with dramatic consequences. Risk is classically considered a combination of hazard, itself a combination of the intensity and frequency of the phenomenon, and vulnerability, which corresponds to the consequences of the phenomenon for exposed people and material assets. Risk management consists of identifying the risk level and choosing the best strategies for risk prevention, i.e. mitigation. In the context of natural phenomena in mountainous areas, technical and scientific knowledge is often lacking, so risk management decisions are based on imperfect information. This information comes from more or less reliable sources ranging from historical data and expert assessments to numerical simulations. Risk management decisions are thus the result of complex knowledge management and reasoning processes. Tracing the information and propagating information quality from data acquisition to decisions are therefore important steps in the decision-making process. One major goal today is to assist decision-making while considering the availability, quality and reliability of information content and sources. A global integrated framework is proposed to improve the risk management process in a context of information imperfection provided by more or less reliable sources: uncertainty as well as imprecision, inconsistency and incompleteness are considered. Several methods are used and associated in an original way: sequential decision context description, development of specific multi-criteria decision-making methods, imperfection propagation in numerical modeling, and information fusion. This framework not only assists in decision-making but also traces the process and evaluates the impact of information quality on decision-making. We focus on two main developments.
The first relates to uncertainty and imprecision propagation in numerical modeling, using both the classical Monte Carlo probabilistic approach and the so-called hybrid approach based on possibility theory. The second deals with new multi-criteria decision-making methods that consider information imperfection, source reliability, importance and conflict, using fuzzy sets as well as possibility and belief function theories. The implemented methods consider imperfection propagation and information fusion in total aggregation methods such as AHP (Saaty, 1980), in partial aggregation methods such as the Electre outranking method (see Soft Electre Tri), and in decisions in certain, risky or uncertain contexts (see the new COWA-ER and FOWA-ER, Cautious and Fuzzy Ordered Weighted Averaging-Evidential Reasoning). For example, the ER-MCDA methodology treats expert assessment as a multi-criteria decision process based on imperfect information provided by more or less heterogeneous, reliable and conflicting sources: it mixes AHP, fuzzy set theory, possibility theory and belief function theory within the DSmT (Dezert-Smarandache Theory) framework, which provides powerful fusion rules.
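The classical Monte Carlo propagation mentioned above can be sketched as follows: sample the uncertain inputs, push each sample through the model, and summarise the output distribution. The toy runout model, its coefficients and the input distributions are invented for illustration and have no physical basis in the paper.

```python
import random
import statistics

def debris_flow_runout(volume, slope):
    """Toy model (illustrative only): runout distance as a function of
    event volume and slope; not a physical model from the paper."""
    return 0.5 * volume ** 0.6 * slope

def monte_carlo(model, volume_dist, slope_dist, n=10_000, seed=1):
    """Classical probabilistic propagation: sample inputs, collect outputs."""
    rng = random.Random(seed)
    outs = [model(volume_dist(rng), slope_dist(rng)) for _ in range(n)]
    return statistics.mean(outs), statistics.stdev(outs)

mean, sd = monte_carlo(
    debris_flow_runout,
    volume_dist=lambda r: r.lognormvariate(7.0, 0.3),   # uncertain volume (m^3)
    slope_dist=lambda r: r.uniform(0.3, 0.5),           # imprecise slope
)
print(f"runout ~ {mean:.1f} +/- {sd:.1f} m")
```

The hybrid approach would replace the uniform slope sample with an interval or possibility distribution propagated alongside the probabilistic one; that part is not sketched here.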
NASA Astrophysics Data System (ADS)
Sliva, Amy L.; Gorman, Joe; Voshell, Martin; Tittle, James; Bowman, Christopher
2016-05-01
The Dual Node Decision Wheels (DNDW) architecture concept was previously described as a novel approach toward integrating analytic and decision-making processes in joint human/automation systems in highly complex sociotechnical settings. In this paper, we extend the DNDW construct with a description of components in this framework, combining structures of the Dual Node Network (DNN) for Information Fusion and Resource Management with extensions on Rasmussen's Decision Ladder (DL) to provide guidance on constructing information systems that better serve decision-making support requirements. The DNN takes a component-centered approach to system design, decomposing each asset in terms of data inputs and outputs according to their roles and interactions in a fusion network. However, to ensure relevancy to and organizational fitment within command and control (C2) processes, principles from cognitive systems engineering emphasize that system design must take a human-centered systems view, integrating information needs and decision making requirements to drive the architecture design and capabilities of network assets. In the current work, we present an approach for structuring and assessing DNDW systems that uses a unique hybrid DNN top-down system design with a human-centered process design, combining DNN node decomposition with artifacts from cognitive analysis (i.e., system abstraction decomposition models, decision ladders) to provide work domain and task-level insights at different levels in an example intelligence, surveillance, and reconnaissance (ISR) system setting. This DNDW structure will ensure not only that the information fusion technologies and processes are structured effectively, but that the resulting information products will align with the requirements of human decision makers and be adaptable to different work settings .
Environmental Visualization and Horizontal Fusion
2005-10-01
the section on EVIS Rules. Federated Search – Discovering Content Another method of discovering services and their content has been implemented...in HF through a next-generation knowledge discovery framework called Federated Search . A virtual information space, called Collateral Space was...environmental mission effects products, is presented later in the paper. Federated Search allows users to search through Collateral Space data that is
Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.
Liu, Min; Wang, Xueping; Zhang, Hongzhong
2018-03-01
In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. Experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we use only the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resource Mobile Robots
Marín, Leonardo; Vallés, Marina; Soriano, Ángel; Valera, Ángel; Albertos, Pedro
2013-01-01
This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman Filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with performance similar to traditional methods. The event is defined to reflect the necessity of the global information: it fires when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorms NXT and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed confirm the correct performance and low execution time of the proposed framework. Robustness and stability are observed during long walk tests in both indoor and outdoor environments. PMID:24152933
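The event-triggered correction described above can be sketched in one dimension: predict from the IMU at every step, and apply the (more expensive) global measurement only when the error covariance crosses a threshold. All numeric constants below are illustrative assumptions, not values from the paper.

```python
def event_based_kf(imu_rates, gps_meas, q=0.05, r=0.5, p_limit=0.4, dt=0.1):
    """1-D sketch of event-based fusion: predict from the IMU every step,
    apply the global (e.g. camera/GPS) correction only when the error
    covariance P exceeds p_limit."""
    x, p, corrections = 0.0, 1.0, 0
    for u, z in zip(imu_rates, gps_meas):
        x, p = x + u * dt, p + q              # prediction from inertial data
        if p > p_limit:                       # event: estimate too uncertain
            k = p / (p + r)                   # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
            corrections += 1
    return x, p, corrections

# Stationary robot: zero IMU rates, global sensor reads position 1.0.
x, p, n_used = event_based_kf([0.0] * 20, [1.0] * 20)
print(round(x, 2), n_used)  # far fewer than 20 global corrections are used
```

The bandwidth saving comes from `corrections` being well below the number of time steps while the estimate still converges toward the global reading.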
Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.
Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li
2018-06-01
State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this setting, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to the different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived under which the DKFEs are secure under replay attacks. An illustrative example demonstrates the effectiveness of the proposed methods.
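At the heart of any fusion center sits some form of weighted combination of local estimates. A minimal scalar sketch, unrelated to the paper's specific DKFE design or its compensation strategy, weights each local estimate by its inverse error variance:

```python
def fuse_estimates(estimates, variances):
    """Variance-weighted fusion of independent local estimates (scalar case):
    each local estimate is weighted by the inverse of its error variance,
    and the fused variance is smaller than any individual one."""
    inv = [1.0 / v for v in variances]
    x = sum(e * w for e, w in zip(estimates, inv)) / sum(inv)
    p = 1.0 / sum(inv)
    return x, p

# Two sink nodes report 10.2 (noisy) and 9.8 (accurate).
xf, pf = fuse_estimates([10.2, 9.8], [1.0, 0.25])
print(round(xf, 2), round(pf, 2))  # 9.88 0.2
```

Note how the fused estimate sits closer to the more reliable sensor, and the fused variance (0.2) is below the best individual variance (0.25).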
Robust QRS peak detection by multimodal information fusion of ECG and blood pressure signals.
Ding, Quan; Bai, Yong; Erol, Yusuf Bugra; Salas-Boni, Rebeca; Zhang, Xiaorong; Hu, Xiao
2016-11-01
QRS peak detection is a challenging problem when the ECG signal is corrupted. However, additional physiological signals may also provide information about the QRS position. In this study, we focus on a unique benchmark provided by the PhysioNet/Computing in Cardiology Challenge 2014 and the Physiological Measurement focus issue on robust detection of heart beats in multimodal data, which aimed to explore robust methods for QRS detection in multimodal physiological signals. A dataset of 200 training and 210 testing records is used, where the testing records are hidden and used for evaluation only. An information fusion framework for robust QRS detection is proposed that leverages existing ECG and ABP analysis tools and combines heart beats derived from different sources. Results show that our approach achieves an overall accuracy of 90.94% and 88.66% on the training and testing datasets, respectively. Furthermore, we observe the expected performance at each step of the proposed approach, as evidence of its effectiveness. A discussion of the limitations of our approach is also provided.
Multi-view information fusion for automatic BI-RADS description of mammographic masses
NASA Astrophysics Data System (ADS)
Narvaez, Fabián; Díaz, Gloria; Romero, Eduardo
2011-03-01
Most CBIR-based CAD systems (Content Based Image Retrieval systems for Computer Aided Diagnosis) identify potentially relevant lesions, basing their analysis on a single independent view. This article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS lexicon, fusing information from the two mammographic views. After an expert selects a Region of Interest (RoI) in each view, a CBIR strategy searches for similar masses in the database by automatically computing the Mahalanobis distance between shape and texture feature vectors of the mammograms. The strategy was assessed on a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided by the database. Two information fusion strategies were evaluated, with the best scheme reaching a retrieval precision of 89.6%. Likewise, the best performance obtained for shape, margin and pathology description, using a ROC methodology, was AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
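The retrieval step can be sketched as follows, using a diagonal-covariance approximation of the Mahalanobis distance (a simplification; the full distance uses the inverse covariance matrix). The two-element feature vectors and BI-RADS-style labels are invented for illustration.

```python
import statistics

def mahalanobis_diag(x, y, variances):
    """Mahalanobis distance under a diagonal covariance approximation:
    each feature's squared difference is scaled by that feature's variance."""
    return sum((a - b) ** 2 / v for a, b, v in zip(x, y, variances)) ** 0.5

def retrieve(query, database, k=2):
    """Rank stored mass descriptors by distance to the query feature vector."""
    feats = [f for f, _ in database]
    variances = [statistics.pvariance(col) or 1.0 for col in zip(*feats)]
    ranked = sorted(database,
                    key=lambda item: mahalanobis_diag(query, item[0], variances))
    return [label for _, label in ranked[:k]]

# Toy (shape, texture) vectors with made-up BI-RADS-style descriptions.
db = [([0.9, 0.1], "round/circumscribed"),
      ([0.2, 0.8], "irregular/spiculated"),
      ([0.85, 0.2], "round/circumscribed")]
print(retrieve([0.88, 0.15], db))
```

Scaling by per-feature variance is what distinguishes Mahalanobis retrieval from plain Euclidean ranking: a feature with a large spread in the database counts for less.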
Fusion and Sense Making of Heterogeneous Sensor Network and Other Sources
2017-03-16
multimodal fusion framework that uses both training data and web resources for scene classification, the experimental results on the benchmark datasets...show that the proposed text-aided scene classification framework could significantly improve classification performance. Experimental results also show...human whose adaptability is achieved by reliability- dependent weighting of different sensory modalities. Experimental results show that the proposed
NASA Astrophysics Data System (ADS)
Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix
2017-12-01
Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites is rare and (2) multi-sensor data collected during a single flight were not accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Then, spectral indices/features were combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques.
Results showed that: (1) For biochemical variable estimation, multispectral and thermal data fusion provided the best estimates for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while fusion of RGB color-based indices with multispectral data exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusing information from all three sensors, with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above-mentioned soybean traits from multi-sensor data fusion, ELR yielded promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multi-sensor data within a machine learning framework can provide relatively accurate estimation of plant traits and valuable insight for high-spatial-precision agriculture and plant stress assessment.
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor, improving its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion deployed in our previous framework.
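The weighted decision-level fusion of two modality-specific classifiers can be sketched as a convex combination of their per-class probabilities. The weight value, class names and probabilities below are illustrative assumptions, not the paper's learned weighting.

```python
def weighted_fusion(p_depth, p_inertial, w_depth=0.6):
    """Fuse per-class probabilities from two modality-specific classifiers
    with a scalar weight on the depth modality (w_depth is assumed)."""
    w_in = 1.0 - w_depth
    fused = {c: w_depth * p_depth[c] + w_in * p_inertial[c] for c in p_depth}
    return max(fused, key=fused.get), fused

# Depth classifier favors "wave"; inertial classifier mildly favors "clap".
label, fused = weighted_fusion({"wave": 0.7, "clap": 0.3},
                               {"wave": 0.4, "clap": 0.6})
print(label, fused)  # the better-weighted depth modality wins: wave
```

Equal weighting (w_depth = 0.5) is the baseline the paper reports improving upon; the extension amounts to choosing the weights per test view.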
Formulating Spatially Varying Performance in the Statistical Fusion Framework
Landman, Bennett A.
2012-01-01
To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513
Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.
Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A
2015-12-01
We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are widely used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
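The two fusion rules named above can be sketched directly on lists standing in for subband coefficients. Using a band-wide standard deviation instead of the paper's area-based (local-window) measure is a simplification made for brevity.

```python
import statistics

def fuse_lowpass(band_a, band_b):
    """Weighted average of low-pass bands, with weights derived from each
    band's standard deviation (a global stand-in for the area-based measure)."""
    sa, sb = statistics.pstdev(band_a), statistics.pstdev(band_b)
    wa = sa / (sa + sb) if sa + sb else 0.5
    return [wa * a + (1 - wa) * b for a, b in zip(band_a, band_b)]

def fuse_highpass(band_a, band_b):
    """Max-absolute rule: keep the coefficient with the larger magnitude,
    preserving the stronger edge/detail response from either source."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(band_a, band_b)]

print(fuse_highpass([0.1, -0.9, 0.4], [0.3, 0.2, -0.5]))  # [0.3, -0.9, -0.5]
```

In the full method these rules run per subband and per direction of the dual tree Contourlet decomposition before the inverse transform reconstructs the fused image.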
RUBE: an XML-based architecture for 3D process modeling and model fusion
NASA Astrophysics Data System (ADS)
Fishwick, Paul A.
2002-07-01
Information fusion is a critical problem for science and engineering. There is a need to fuse information content specified as either data or model. We frame our work in terms of fusing dynamic and geometric models, to create an immersive environment where these models can be juxtaposed in 3D, within the same interface. The method by which this is accomplished fits well into other eXtensible Markup Language (XML) approaches to fusion in general. The task of modeling lies at the heart of the human-computer interface, joining the human to the system under study through a variety of sensory modalities. I overview modeling as a key concern for the Defense Department and the Air Force, and then follow with a discussion of past, current, and future work. Past work began with a package written in C and has progressed, in current work, to an implementation in XML. Our current work is defined within the RUBE architecture, which is detailed in subsequent papers devoted to key components. We have built RUBE as a next-generation modeling framework using our prior software, with research opportunities in immersive 3D and tangible user interfaces.
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy rebuilds the image by replacing the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. Finally, the outputs of an SVM are fitted to probabilities of the target class using a sigmoid function. In the local aspect, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion class. Finally, decision-level fusion of the global and local features is performed based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
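The decision-level fusion step relies on Dempster-Shafer evidence combination. A minimal sketch of Dempster's rule for two mass functions follows; the expression classes and mass values are invented for illustration, and the global/local classifiers here are just two generic evidence sources.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets: multiply masses over intersecting sets and renormalise
    by 1 minus the total conflict."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

# Global and local classifiers as evidence sources over {happy, sad};
# mass on the full set encodes each source's remaining ignorance.
m_global = {frozenset({"happy"}): 0.6, frozenset({"happy", "sad"}): 0.4}
m_local = {frozenset({"happy"}): 0.5, frozenset({"sad"}): 0.2,
           frozenset({"happy", "sad"}): 0.3}
print(dempster_combine(m_global, m_local))
```

Combining the two sources concentrates mass on {happy} while the small conflict (0.12 here) is redistributed by the normalisation.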
NASA Astrophysics Data System (ADS)
Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.
2016-05-01
This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.
Fusion materials high energy-neutron studies. A status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doran, D.G.; Guinan, M.W.
1980-01-01
The objectives of this paper are (1) to provide background information on the US Magnetic Fusion Reactor Materials Program, (2) to provide a framework for evaluating nuclear data needs associated with high energy neutron irradiations, and (3) to show the current status of relevant high energy neutron studies. Since the last symposium, the greatest strides in cross section development have been taken in those areas providing FMIT design data, e.g., source description, shielding, and activation. In addition, many dosimetry cross sections have been tentatively extrapolated to 40 MeV and integral testing begun. Extensive total helium measurements have been made in a variety of neutron spectra. Additional calculations are needed to assist in determining energy dependent cross sections.
National Center for Multisource Information Fusion
2009-04-01
discipline. The center has focused its efforts in solving the growing problems of exploiting massive quantities of diverse, and often...development of a comprehensive high level fusion framework that includes the addition of Levels 2, 3 and 4 type tools to the ECCARS...correlate IDS alerts into individual attacks and provide a threat assessment for the network. A comprehensive review of attack graphs was conducted
Multisensor fusion with non-optimal decision rules: the challenges of open world sensing
NASA Astrophysics Data System (ADS)
Minor, Christian; Johnson, Kevin
2014-05-01
In this work, simple, generic models of chemical sensing are used to simulate sensor array data and to illustrate the impact on overall system performance that specific design choices impart. The ability of multisensor systems to perform multianalyte detection (i.e., distinguish multiple targets) is explored by examining the distinction between fundamental design-related limitations stemming from mismatching of mixture composition to fused sensor measurement spaces, and limitations that arise from measurement uncertainty. Insight on the limits and potential of sensor fusion to robustly address detection tasks in realistic field conditions can be gained through an examination of a) the underlying geometry of both the composition space of sources one hopes to elucidate and the measurement space a fused sensor system is capable of generating, and b) the informational impact of uncertainty on both of these spaces. For instance, what is the potential impact on sensor fusion in an open world scenario where unknown interferants may contaminate target signals? Under complex and dynamic backgrounds, decision rules may implicitly become non-optimal and adding sensors may increase the amount of conflicting information observed. This suggests that the manner in which a decision rule handles sensor conflict can be critical in leveraging sensor fusion for effective open world sensing, and becomes exponentially more important as more sensors are added. Results and design considerations for handling conflicting evidence in Bayes and Dempster-Shafer fusion frameworks are presented. Bayesian decision theory is used to provide an upper limit on detector performance of simulated sensor systems.
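A naive Bayes fusion of per-sensor likelihoods illustrates the conflict issue raised above: two agreeing sensors build a confident posterior, and a third, conflicting sensor (e.g. one swamped by an unknown interferant) erodes it. The class names and likelihood values are invented for illustration.

```python
def bayes_fuse(prior, likelihoods):
    """Naive Bayes fusion: multiply per-sensor likelihoods P(z_i | class),
    assuming conditional independence across sensors, then normalise."""
    post = dict(prior)
    for lik in likelihoods:
        post = {c: post[c] * lik[c] for c in post}
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

prior = {"target": 0.5, "clean": 0.5}
agree = {"target": 0.8, "clean": 0.2}       # sensor supporting detection
conflict = {"target": 0.1, "clean": 0.9}    # sensor hit by an interferant

print(bayes_fuse(prior, [agree, agree]))            # confident detection
print(bayes_fuse(prior, [agree, agree, conflict]))  # posterior pulled down
```

Under the conditional-independence assumption each added conflicting sensor multiplies into the posterior, which is why the handling of conflict becomes exponentially more important as sensors are added.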
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
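The kind of producer-consumer fusion FuseT automates can be illustrated by hand on a toy binary tree. This is a hypothetical sketch of the transformation's effect, not the FuseT framework itself: a producer traversal materializes an intermediate tree that a consumer traversal then walks, while the fused operator does both in one pass.

```python
# Two primitive recursive operators with a producer-consumer relationship,
# and a hand-fused composite operator that performs both in a single
# traversal without materializing the intermediate tree.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def scale(t, c):
    """Producer: build a new tree with every value multiplied by c."""
    if t is None:
        return None
    return Node(t.value * c, scale(t.left, c), scale(t.right, c))

def tree_sum(t):
    """Consumer: sum all values in the tree."""
    if t is None:
        return 0
    return t.value + tree_sum(t.left) + tree_sum(t.right)

def fused_scale_sum(t, c):
    """Fused operator: one traversal, no intermediate tree, better locality."""
    if t is None:
        return 0
    return t.value * c + fused_scale_sum(t.left, c) + fused_scale_sum(t.right, c)

t = Node(1, Node(2, Node(4)), Node(3))
assert tree_sum(scale(t, 10)) == fused_scale_sum(t, 10)  # same result, one pass
```

The fused version visits each node once and touches no intermediate storage, which is the locality benefit the paper quantifies for MADNESS operators.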
SmartMal: a service-oriented behavioral malware detection framework for mobile devices.
Wang, Chao; Wu, Zhizhong; Li, Xi; Zhou, Xuehai; Wang, Aili; Hung, Patrick C K
2014-01-01
This paper presents SmartMal, a novel service-oriented behavioral malware detection framework for vehicular and mobile devices. The highlight of SmartMal is to introduce service-oriented architecture (SOA) concepts and behavior analysis into the malware detection paradigm. The proposed framework relies on a client-server architecture: the client continuously extracts various features and transfers them to the server, whose main task is to detect anomalies using state-of-the-art detection algorithms. Multiple distributed servers simultaneously analyze the feature vector using various detectors, and information fusion is used to combine the detectors' results. We also propose a cycle-based statistical approach for mobile device anomaly detection, accomplished by analyzing users' regular usage patterns. Empirical results suggest that the proposed framework and novel anomaly detection algorithm are highly effective in detecting malware on Android devices.
Knowledge assistant: A sensor fusion framework for robotic environmental characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feddema, J.T.; Rivera, J.J.; Tucker, S.D.
1996-12-01
A prototype sensor fusion framework called the "Knowledge Assistant" has been developed and tested on a gantry robot at Sandia National Laboratories. This Knowledge Assistant guides the robot operator during the planning, execution, and post-analysis stages of the characterization process. During the planning stage, the Knowledge Assistant suggests robot paths and speeds based on knowledge of sensors available and their physical characteristics. During execution, the Knowledge Assistant coordinates the collection of data through a data acquisition "specialist." During execution and post-analysis, the Knowledge Assistant sends raw data to other "specialists," which include statistical pattern recognition software, a neural network, and model-based search software. After the specialists return their results, the Knowledge Assistant consolidates the information and returns a report to the robot control system, where the sensed objects and their attributes (e.g. estimated dimensions, weight, material composition, etc.) are displayed in the world model. This paper highlights the major components of this system.
Data Fusion of Geographically Dispersed Information: Experience With the Scalable Data Grid
2011-03-01
framework that exposes calls to this HLA interface to registered plug-ins (Figure 2). The SDG exploits this plug-in to intercept and log messages … implementation and use of the SDG and Hadoop … Factored Meshrouter implementation, with application-specific communications primitives … show promise for the T&E environments. It reported on experiments implementing the SDG and on
Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian
2014-03-21
This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.
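The three-step denoise-fuse-enhance pipeline can be sketched with plain NumPy stand-ins. The adaptive Wiener step follows the standard local mean/variance formula; the activity-weighted fusion and global contrast stretch are simplified stand-ins for the paper's shift-invariant wavelet fusion and adaptive histogram equalization, and all parameter values are invented.

```python
import numpy as np

def box3(x):
    """3x3 box filter via edge padding (local-statistics building block)."""
    p = np.pad(x, 1, mode="edge")
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def wiener(x, noise_var):
    """Adaptive Wiener filter from local mean and variance."""
    mu = box3(x)
    var = np.maximum(box3(x * x) - mu * mu, 0.0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (x - mu)

def fuse(a, b):
    """Activity-weighted fusion: prefer the image with higher local variance."""
    va = np.maximum(box3(a * a) - box3(a) ** 2, 0.0)
    vb = np.maximum(box3(b * b) - box3(b) ** 2, 0.0)
    w = va / np.maximum(va + vb, 1e-12)
    return w * a + (1.0 - w) * b

def stretch(x):
    """Global contrast stretch to [0, 1] (stand-in for adaptive equalization)."""
    lo, hi = x.min(), x.max()
    return (x - lo) / max(hi - lo, 1e-12)

rng = np.random.default_rng(0)
ac, dpc, dfc = (rng.random((32, 32)) for _ in range(3))   # toy AC/DPC/DFC images
den = [wiener(im, 0.01) for im in (ac, dpc, dfc)]         # step (i)
fused = fuse(fuse(den[0], den[1]), den[2])                # step (ii): AC+DPC, then +DFC
final = stretch(fused)                                    # step (iii)
```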
Fusion of multi-tracer PET images for dose painting.
Lelandais, Benoît; Ruan, Su; Denœux, Thierry; Vera, Pierre; Gardin, Isabelle
2014-10-01
PET imaging with FluoroDesoxyGlucose (FDG) tracer is clinically used for the definition of Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as FLuoroThymidine (FLT) or FluoroMisonidazol (FMiso), have been proposed. They provide complementary information for the definition of BTVs. Our work is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist in dose painting. Due to the noise and the partial volume effect leading, respectively, to the presence of uncertainty and imprecision in PET images, the segmentation and the fusion of PET images is difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for the segmentation of BTV from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm, taking advantage of neighboring voxels for dealing with uncertainty and imprecision in each mono-tracer PET image. Then, imprecision and uncertainty are, respectively, reduced using prior knowledge related to defects in the acquisition system and neighborhood information. Finally, a multi-tracer PET image fusion is performed. The results are represented by a set of parametric maps that provide important information for dose painting. The performances are evaluated on PET phantoms and patient data with lung cancer. Quantitative results show good performance of our method compared with other methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Early repositioning through compound set enrichment analysis: a knowledge-recycling strategy.
Temesi, Gergely; Bolgár, Bence; Arany, Adám; Szalai, Csaba; Antal, Péter; Mátyus, Péter
2014-04-01
Despite famous serendipitous drug repositioning success stories, systematic projects have not yet delivered the expected results. However, repositioning technologies are gaining ground in different phases of routine drug development, together with new adaptive strategies. We demonstrate the power of the compound information pool, the ever-growing heterogeneous information repertoire of approved drugs and candidates as an invaluable catalyzer in this transition. Systematic, computational utilization of this information pool for candidates in early phases is an open research problem; we propose a novel application of the enrichment analysis statistical framework for fusion of this information pool, specifically for the prediction of indications. Pharmaceutical consequences are formulated for a systematic and continuous knowledge recycling strategy, utilizing this information pool throughout the drug-discovery pipeline.
Left Ventricular Endocardium Tracking by Fusion of Biomechanical and Deformable Models
Gu, Jason
2014-01-01
This paper presents a framework for tracking the left ventricular (LV) endocardium through a 2D echocardiography image sequence. The framework is based on the fusion of a biomechanical (BM) model of the heart with a parametric deformable model. The BM model constitutive equation consists of passive and active strain energy functions. The deformations of the LV are obtained by solving the constitutive equations using the ABAQUS FEM in each frame of the cardiac cycle. The strain energy functions are defined in two user subroutines for the active and passive phases. An average fusion technique is used to fuse the BM and deformable model contours. Experiments are conducted to verify the detected contours, and the results are evaluated by comparing them to a created gold standard. The results and the evaluation show that the framework has great potential to track and segment the LV through the whole cardiac cycle. PMID:24587814
Weighted score-level feature fusion based on Dempster-Shafer evidence theory for action recognition
NASA Astrophysics Data System (ADS)
Zhang, Guoliang; Jia, Songmin; Li, Xiuzhi; Zhang, Xiangyin
2018-01-01
The majority of human action recognition methods use a multifeature fusion strategy to improve classification performance, but the contribution of different features to a specific action has not received enough attention. We present an extendible and universal weighted score-level feature fusion method using the Dempster-Shafer (DS) evidence theory, based on the bag-of-visual-words pipeline. First, partially distinctive samples in the training set are selected to construct the validation set. Then, local spatiotemporal features and pose features are extracted from these samples to obtain evidence information. The DS evidence theory and the proposed rule of survival of the fittest are employed to achieve evidence combination and calculate optimal weight vectors for every feature type belonging to each action class. Finally, the recognition results are deduced via a weighted summation strategy. The performance of the established recognition framework is evaluated on the Penn Action dataset and a subset of the joint-annotated human motion database (sub-JHMDB). The experimental results demonstrate that the proposed feature fusion method can adequately exploit the complementarity among multiple features and improves upon most state-of-the-art algorithms on the Penn Action and sub-JHMDB datasets.
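The final weighted-summation step can be sketched as follows. This is a toy illustration with invented scores and weights; in the paper the per-class weights of each feature channel are optimized on a validation set via DS evidence combination.

```python
import numpy as np

# scores[f, c] = classifier score of feature channel f for action class c.
scores = np.array([
    [0.6, 0.3, 0.1],   # spatiotemporal feature channel
    [0.2, 0.5, 0.3],   # pose feature channel
])
# w[f, c] = per-class weight of each feature channel (invented values here;
# the paper learns these via DS evidence combination on a validation set).
w = np.array([
    [0.7, 0.4, 0.5],
    [0.3, 0.6, 0.5],
])

fused = (w * scores).sum(axis=0)   # weighted summation over feature channels
decision = int(np.argmax(fused))   # predicted action class
```

With these numbers the fused scores are [0.48, 0.42, 0.20], so class 0 wins even though the pose channel alone would have voted for class 1: the per-class weights let the more reliable channel dominate per action.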
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, accomplishing a linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
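A heavily simplified stand-in for the paper's formulation can illustrate the two competing terms: least-squares fidelity to the multispectral image plus a gradient-matching term toward the panchromatic image. This sketch uses a quadratic gradient penalty instead of the paper's dynamic gradient sparsity, plain gradient descent instead of the paper's efficient algorithm, and omits the simultaneous registration; all parameters are invented.

```python
import numpy as np

def lap(x):
    """Discrete Laplacian with periodic boundaries (compact gradient-term stand-in)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)

def fuse(ms_up, pan, lam=5.0, step=0.02, iters=400):
    """Gradient descent on 0.5*||x - ms||^2 + 0.5*lam*||grad x - grad pan||^2."""
    x = ms_up.copy()
    for _ in range(iters):
        g = (x - ms_up) - lam * (lap(x) - lap(pan))  # objective gradient
        x -= step * g
    return x

rng = np.random.default_rng(1)
pan = rng.random((16, 16))                 # sharp high-resolution band
ms_up = np.full_like(pan, pan.mean())      # extreme toy "low-res" band (flat)
fused_img = fuse(ms_up, pan)
```

At convergence the low frequencies of the result follow the multispectral term (spectral fidelity) while the high frequencies follow the panchromatic gradients (sharp edges), which is the trade-off the paper's convex objective encodes.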
DOT National Transportation Integrated Search
1996-09-01
This project has accomplished three significant tasks. First, a state-of-the-art literature review has provided an organizational framework for categorizing the various data fusion projects that have been conducted to date. A popular typology was dis…
Energetic particle instabilities in fusion plasmas
NASA Astrophysics Data System (ADS)
Sharapov, S. E.; Alper, B.; Berk, H. L.; Borba, D. N.; Breizman, B. N.; Challis, C. D.; Classen, I. G. J.; Edlund, E. M.; Eriksson, J.; Fasoli, A.; Fredrickson, E. D.; Fu, G. Y.; Garcia-Munoz, M.; Gassner, T.; Ghantous, K.; Goloborodko, V.; Gorelenkov, N. N.; Gryaznevich, M. P.; Hacquin, S.; Heidbrink, W. W.; Hellesen, C.; Kiptily, V. G.; Kramer, G. J.; Lauber, P.; Lilley, M. K.; Lisak, M.; Nabais, F.; Nazikian, R.; Nyqvist, R.; Osakabe, M.; Perez von Thun, C.; Pinches, S. D.; Podesta, M.; Porkolab, M.; Shinohara, K.; Schoepf, K.; Todo, Y.; Toi, K.; Van Zeeland, M. A.; Voitsekhovich, I.; White, R. B.; Yavorskij, V.; TG, ITPA EP; Contributors, JET-EFDA
2013-10-01
Remarkable progress has been made in diagnosing energetic particle instabilities on present-day machines and in establishing a theoretical framework for describing them. This overview describes the much improved diagnostics of Alfvén instabilities and modelling tools developed world-wide, and discusses progress in interpreting the observed phenomena. A multi-machine comparison is presented giving information on the performance of both diagnostics and modelling tools for different plasma conditions outlining expectations for ITER based on our present knowledge.
Technologies for Army Knowledge Fusion
Scherl, Richard; Ulery, Dana L.
2004-09-01
interpret it in context and understand the implications (Alberts et al., 2002). Note that the knowledge/information fusion issue arises immediately here … civilian and military sources. Knowledge fusion, also called information fusion and multisensor data fusion, names the body of techniques needed to …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loparo, Kenneth; Kolacinski, Richard; Threeanaew, Wanchat
A central goal of the work was to enable both the extraction of all relevant information from sensor data, and the application of information gained from appropriate processing and fusion at the system level to operational control and decision-making at various levels of the control hierarchy through: 1. Exploiting the deep connection between information theory and the thermodynamic formalism, 2. Deployment using distributed intelligent agents with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, the current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant; that is, to identify the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm, based on the foraging behavior of ants, for eliciting this structure, and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom's 1000 MW ultra-super-critical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g. mutual information and linear correlation.
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and complex reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using the regular camera, and also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, for determining distance information robustly, we propose a data fusion algorithm, embedded into an energy minimization framework, that incorporates active depth measurements using the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available with conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
Information fusion: telling the story (or threat narrative)
NASA Astrophysics Data System (ADS)
Fenstermacher, Laurie
2014-06-01
Today's operators face a "double whammy": the need to process increasing amounts of information, including "Twitter-INT" (social information such as Facebook, YouTube videos, blogs, Twitter), as well as the need to discern threat signatures in new security environments, including those in which the airspace is contested. To do this will require the Air Force to "fuse and leverage its vast capabilities in new ways." For starters, the integration of quantitative and qualitative information must be done in a way that preserves important contextual information, since the goal increasingly is to identify and mitigate violence before it occurs. Doing so requires a more nuanced understanding of the environment being sensed, including the human environment, ideally from the "emic" perspective; that is, from the perspective of that individual or group. This requires not only data and information that inform the understanding of how the individuals and/or groups see themselves and others (social identity) but also information on how that identity filters information in their environment, which, in turn, shapes their behaviors. The goal is to piece together the individual and/or collective narratives regarding threat, the threat narrative, from various sources of information. Is there a threat? If so, what is it? What is motivating the threat? What is the intent of those who pose the threat, and what are their capabilities and their vulnerabilities? This paper describes preliminary investigations regarding the application of a prototype hybrid information fusion method based on the threat narrative framework.
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation stage to detect the target accurately and enhance the target's intensity notably. In the noise suppression stage, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small-infrared-target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing the small infrared target, especially for images in which the target is hardly visible.
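The energy-accumulation idea, dynamic programming over feasible target motions, can be sketched in one dimension. This is a generic track-before-detect illustration with invented signal and noise levels, not the paper's implementation; each cell keeps the best accumulated track energy ending there, allowing at most one pixel of motion per frame.

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, width = 5, 20
frames = 0.2 * rng.random((n_frames, width))   # clutter/noise floor
for t in range(n_frames):
    frames[t, 5 + t] += 1.0                    # dim target drifting +1 px/frame

def dp_tbd(frames, max_shift=1):
    """Energy accumulation: each cell keeps the best track energy ending there.

    np.roll wraps at the edges, which is acceptable for this toy sketch.
    """
    energy = frames[0].astype(float).copy()
    for frame in frames[1:]:
        best_prev = np.maximum.reduce(
            [np.roll(energy, s) for s in range(-max_shift, max_shift + 1)])
        energy = frame + best_prev
    return energy

energy = dp_tbd(frames)
print("detected target cell:", int(np.argmax(energy)))   # expect 9 (= 5 + 4)
```

Because the target adds 1.0 per frame while noise stays below 0.2, the accumulated energy along the true track dominates every other feasible path, so the peak of the final energy map sits on the target's last position even though the target is faint in any single frame.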
Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device
He, Xiang; Aloi, Daniel N.; Li, Jia
2015-01-01
Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387
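The online tracking phase can be illustrated with a minimal particle filter in one dimension. The log-distance RSSI model, motion noise, and corridor length below are all invented for illustration; the paper uses a multimodal particle filter over a graph model trained offline from multiple sensors.

```python
import numpy as np

rng = np.random.default_rng(3)

def rssi(pos, ap=0.0, tx=-40.0, n=2.0):
    """Toy log-distance path-loss model (all parameters are made up)."""
    d = np.maximum(np.abs(pos - ap), 0.5)
    return tx - 10.0 * n * np.log10(d)

true_pos, n_particles, sigma = 8.0, 2000, 2.0        # sigma: dBm noise
particles = rng.uniform(0.0, 20.0, n_particles)      # 20 m corridor prior

for _ in range(50):
    particles += rng.normal(0.0, 0.3, n_particles)   # predict: motion model
    z = rssi(true_pos) + rng.normal(0.0, sigma)      # noisy RSSI observation
    w = np.exp(-0.5 * ((z - rssi(particles)) / sigma) ** 2)
    w /= w.sum()                                     # update: likelihood weights
    idx = rng.choice(n_particles, n_particles, p=w)  # resample
    particles = particles[idx]

estimate = particles.mean()
```

The predict/update/resample loop is the generic particle-filter skeleton; the paper's contribution lies in the multimodal likelihood (fusing several sensor types) and the graph-structured state space, both replaced here by a single toy RSSI likelihood on a line.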
Semi-autonomous remote sensing time series generation tool
NASA Astrophysics Data System (ADS)
Babu, Dinesh Kumar; Kaufmann, Christof; Schmidt, Marco; Dhams, Thorsten; Conrad, Christopher
2017-10-01
High spatial and temporal resolution data are vital for crop monitoring and phenology change detection. Due to limitations of current satellite architectures and frequent cloud cover, the availability of daily high-spatial-resolution data is still far from reality. Remote sensing time series generation of high spatial and temporal resolution data by data fusion seems to be a practical alternative. However, it is not an easy process, since it involves multiple steps and also requires multiple tools. In this paper, a framework for a Geographic Information System (GIS) based tool is presented for semi-autonomous time series generation. This tool eliminates these difficulties by automating all the steps and enables users to generate synthetic time series data with ease. First, all the steps required for the time series generation process are identified and grouped into blocks based on their functionality. Then two main frameworks are created: one to perform all the pre-processing steps on various satellite data, and the other to perform data fusion to generate the time series. The two frameworks can be used individually to perform specific tasks, or they can be combined to perform both processes in one go. The tool can handle most of the known geo-data formats currently available, which makes it a generic tool for time series generation from various remote sensing satellite data. It is developed as a common platform with a good interface that provides many functions to enable further development of more remote sensing applications. A detailed description of the capabilities and advantages of the frameworks is given in this paper.
FuzzyFusion: an application architecture for multisource information fusion
NASA Astrophysics Data System (ADS)
Fox, Kevin L.; Henning, Ronda R.
2009-04-01
The correlation of information from disparate sources has long been an issue in data fusion research. Traditional data fusion addresses the correlation of information from sources as diverse as single-purpose sensors and all-source multimedia information. Information system vulnerability information is similar in its diversity of sources and content, and in the desire to draw a meaningful conclusion, namely, the security posture of the system under inspection. FuzzyFusion™, a data fusion model that is being applied to the computer network operations domain, is presented. This model has been successfully prototyped in an applied research environment and represents a next-generation assurance tool for system and network security.
FLYSAFE, nowcasting of in flight icing supporting aircrew decision making process
NASA Astrophysics Data System (ADS)
Drouin, A.; Le Bot, C.
2009-09-01
FLYSAFE is an Integrated Project of the 6th Framework Programme of the European Commission with the aim of improving flight safety through the development of a Next Generation Integrated Surveillance System (NGISS). The NGISS provides information to the flight crew on the three major external hazards for aviation: weather, air traffic and terrain. The NGISS has the capability of displaying data about all three hazards on a single display screen, facilitating rapid appreciation of the situation by the flight crew. Weather Information Management Systems (WIMS) were developed to provide the NGISS and the flight crew with weather-related information on in-flight icing, thunderstorms, wake vortices and clear-air turbulence. These products are generated on the ground from observations and model forecasts. WIMS supply relevant information on three different scales: global, regional and local (over the airport Terminal Manoeuvring Area). Within the FLYSAFE programme, around 120 hours of flight trials were performed during February 2008 and August 2008. Two aircraft were involved, each with separate objectives: - to assess FLYSAFE's innovative solutions for the data-link, on-board data fusion, data display, and data updates during flight; - to evaluate the new weather information management systems (in-flight icing and thunderstorms) using in-situ measurements recorded on board the test aircraft. In this presentation we focus on the in-flight icing nowcasting system developed at Météo France in the framework of FLYSAFE: the local ICE WIMS. The local ICE WIMS is based on data fusion. The most relevant information for icing detection is extracted from the numerical weather prediction model, the infrared and visible satellite imagery, and the ground weather radar reflectivities. After a presentation of the local ICE WIMS, we detail its evaluation using the winter and summer flight trial data.
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Ogiela, Marek R.
2012-10-01
The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of regions of perfusion abnormalities, anatomy atlas descriptions of brain tissues, measures of perfusion parameters, and a prognosis for infarcted tissues. That information is superimposed onto volumetric computed tomography data and displayed to radiologists. Our rendering algorithm can render large volumes on off-the-shelf hardware. This portability is important because the framework can run without expensive dedicated hardware. Other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading image quality for rendering speed. Such rendered, high-quality visualizations may be further used for intelligent identification of brain perfusion abnormalities and computer-aided diagnosis of selected types of pathologies.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method was also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
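As a concrete illustration of the reconstruction-based family described above, the following sketch derives voting weights by ridge-regularised reconstruction of the target patch from atlas patches. This is illustrative code, not the authors' transversal method; the function name, regularisation choice, and toy data are all assumptions for the example.

```python
# Hedged sketch of reconstruction-based patch label fusion
# (illustrative; not the paper's matrix-completion method).
import numpy as np

def reconstruction_label_fusion(target_patch, atlas_patches, atlas_labels, lam=0.01):
    """Weight atlas labels by how well their patches reconstruct the target.

    target_patch : (d,) target image patch
    atlas_patches: (d, n) columns are candidate atlas patches
    atlas_labels : (n,) label of the centre voxel of each atlas patch
    Returns the fused label and the normalised weight vector.
    """
    A = np.asarray(atlas_patches, dtype=float)
    t = np.asarray(target_patch, dtype=float)
    n = A.shape[1]
    # Ridge-regularised least squares: w = argmin ||A w - t||^2 + lam ||w||^2
    w = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ t)
    w = np.clip(w, 0.0, None)             # keep weights non-negative
    w /= w.sum() if w.sum() > 0 else 1.0  # normalise to a voting distribution
    labels = np.asarray(atlas_labels)
    scores = {lab: w[labels == lab].sum() for lab in np.unique(labels)}
    return max(scores, key=scores.get), w

# Toy example: the target patch equals the first atlas patch exactly.
atlases = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # two 3-voxel patches
fused, weights = reconstruction_label_fusion([1.0, 0.0, 0.0], atlases, [1, 2])
```

The classification-based alternative would instead learn a mapping from patches to labels; the transversal method of the paper couples both views in a single matrix-completion problem.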
URREF Reliability Versus Credibility in Information Fusion
2013-07-01
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. Such fusion images help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information, because targets in long-distance infrared and low-light-level images are inconspicuous; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain the rich natural information of the scenes.
SuperIdentity: Fusion of Identity across Real and Cyber Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, Sue; Creese, Sadie; Guest, Richard
Under both benign and malign circumstances, people now manage a spectrum of identities across both real-world and cyber domains. Our belief, however, is that all these instances ultimately track back to a single 'SuperIdentity' for each individual. This paper outlines the assumptions underpinning the SuperIdentity Project, describing the innovative use of data fusion to incorporate novel real-world and cyber cues into a rich framework appropriate for modern identity. The proposed combinatorial model will support a robust identification or authentication decision, with confidence indexed both by the level of trust in data provenance and by the diagnosticity of the identity factors being used. Additionally, the exploration of correlations between factors may underpin the more intelligent use of identity information, so that known information may be used to predict previously hidden information. With modern living supporting the 'distribution of identity' across real and cyber domains, and with criminal elements operating in increasingly sophisticated ways in the hinterland between the two, this approach is suggested as a way forwards, and is discussed in terms of its impact on privacy, security, and the detection of threat.
NASA Astrophysics Data System (ADS)
Militello, F.; Farley, T.; Mukhi, K.; Walkden, N.; Omotani, J. T.
2018-05-01
A statistical framework was introduced in Militello and Omotani [Nucl. Fusion 56, 104004 (2016)] to correlate the dynamics and statistics of L-mode and inter-ELM plasma filaments with the radial profiles of thermodynamic quantities they generate in the Scrape Off Layer. This paper extends the framework to cases in which the filaments are emitted from the separatrix at different toroidal positions and with a finite toroidal velocity. It is found that the toroidal velocity does not affect the profiles, while the toroidal distribution of filament emission renormalises the waiting time between two events. Experimental data collected by visual camera imaging are used to evaluate the statistics of the fluctuations, to inform the choice of the probability distribution functions used in the application of the framework. It is found that the toroidal separation of the filaments is exponentially distributed, thus suggesting the lack of a toroidal modal structure. Finally, using these measurements, the framework is applied to an experimental case and good agreement is found.
Luo, Xiongbiao; Wan, Ying; He, Xiangjian
2015-04-01
Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework based on an enhanced particle swarm optimization method to fuse this information effectively for accurate and continuous endoscope localization. The authors use particle swarm optimization, a stochastic evolutionary computation algorithm, to fuse multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by possible premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization, and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, giving the enhanced algorithm advantageous performance. The experimental results demonstrate that the proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29.
A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors, from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.
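The enhanced PSO itself is not reproduced here, but the underlying particle swarm machinery can be sketched as follows. The fusion cost, which blends an electromagnetic sensor reading with a camera-based position estimate, is a stand-in with hypothetical numbers and weights, not the authors' objective function.

```python
# Minimal particle swarm optimisation sketch (standard PSO, not the
# paper's enhanced variant with observation boosting).
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in fusion cost: disagreement with an EM sensor position and a
# camera-based position estimate (hypothetical values and weighting).
em, cam = [1.0, 2.0, 3.0], [1.2, 1.9, 3.1]
cost = lambda p: (sum((pi - ei) ** 2 for pi, ei in zip(p, em))
                  + 0.5 * sum((pi - ci) ** 2 for pi, ci in zip(p, cam)))
best, best_cost = pso(cost, dim=3)
```

The swarm converges to a weighted compromise between the two sensor readings; the paper's enhancement additionally injects the current observation into the update and adapts w, c1, c2 on the fly.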
Self-recognition in corals facilitates deep-sea habitat engineering
Hennige, Sebastian J; Morrison, Cheryl L.; Form, Armin U.; Buscher, Janina; Kamenos, Nicholas A.; Roberts, J. Murray
2014-01-01
The ability of coral reefs to engineer complex three-dimensional habitats is central to their success and the rich biodiversity they support. In tropical reefs, encrusting coralline algae bind together substrates and dead coral framework to make continuous reef structures, but beyond the photic zone, the cold-water coral Lophelia pertusa also forms large biogenic reefs, facilitated by skeletal fusion. Skeletal fusion in tropical corals can occur in closely related or juvenile individuals as a result of non-aggressive skeletal overgrowth or allogeneic tissue fusion, but contact reactions in many species result in mortality if there is no 'self-recognition' on a broad species level. This study reveals that areas of 'flawless' skeletal fusion in Lophelia pertusa, potentially facilitated by allogeneic tissue fusion, are characterised by small aragonitic crystals or low levels of crystal organisation, and by strong molecular bonding. Regardless of the mechanism, the recognition of 'self' between adjacent L. pertusa colonies leads to no observable mortality, facilitates ecosystem engineering, and reduces aggression-related energetic expenditure in an environment where energy conservation is crucial. Understanding the potential for self-recognition at a species level, and the subsequent skeletal fusion in framework-forming cold-water corals, is an important first step in understanding their significance as ecological engineers in deep seas worldwide.
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services within healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2 and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The framework is used to improve healthcare scalability by enhancing the remote triaging and remote prioritization processes for patients, and to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As a telemonitoring system consists of three tiers (sensors/sources, base station and server), the simulation of the MSHA algorithm in the base station is demonstrated in this paper. Our main goal is to achieve a high level of accuracy in remotely triaging and prioritizing patients; we also demonstrate the role of multi-source data fusion in telemonitoring healthcare services systems and discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results for different symptoms, related to different emergency levels of chronic heart disease, demonstrate the superiority of our algorithm compared with conventional algorithms in remotely classifying and prioritizing patients.
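A minimal sketch of the kind of multi-source triage fusion MSHA performs might look as follows. The thresholds, scoring scheme, and symptom keyword are illustrative placeholders invented for the example; they are neither clinical rules nor the MSHA algorithm.

```python
# Illustrative multi-source triage sketch: fuse sensor readings (ECG-derived
# heart rate, SpO2, blood pressure) with a text-based symptom input into a
# priority level. All thresholds are hypothetical, not clinical guidance.
def triage_priority(heart_rate, spo2, systolic_bp, symptom_text=""):
    """Return a priority level: 1 = most urgent, 3 = routine."""
    score = 0
    if heart_rate < 40 or heart_rate > 130:
        score += 2
    elif heart_rate < 50 or heart_rate > 110:
        score += 1
    if spo2 < 90:
        score += 2
    elif spo2 < 94:
        score += 1
    if systolic_bp < 90 or systolic_bp > 180:
        score += 2
    if "chest pain" in symptom_text.lower():   # text-based input channel
        score += 2
    return 1 if score >= 4 else 2 if score >= 2 else 3

# Normal vitals -> routine; several abnormal channels -> most urgent.
routine = triage_priority(75, 98, 120)
urgent = triage_priority(140, 88, 85, "severe chest pain")
```

The point of the sketch is the fusion structure: each source contributes independent evidence, and the base station aggregates it into a single remotely assigned priority.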
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone.
This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because it reduced forest AGB sampling errors by 15-38%. Furthermore, spaceborne global-scale accuracy requirements were achieved. At least 80% of the grid cells at the 100m, 250m, 500m, and 1km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR alone, accuracy requirements were met at the 500m and 250m grid levels, respectively.
¹⁸O + ¹²C fusion-evaporation reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heusch, B; Beck, C; Coffin, J P
1980-01-01
A study of the ¹⁸O + ¹²C fusion-evaporation reaction has been undertaken for two reasons: to make a systematic study of the formation cross section for each individual evaporation residue over a broad excitation-energy region (30 to 62 MeV) in the compound nucleus ³⁰Si; and to compare all results to fusion-evaporation calculations done in the framework of the Hauser-Feshbach statistical model.
volBrain: An Online MRI Brain Volumetry System
Manjón, José V.; Coupé, Pierrick
2016-01-01
The amount of medical image data produced in clinical and research settings is rapidly growing, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently seen many advances, with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods, showing very competitive results. PMID:27512372
The Fusion Model of Intelligent Transportation Systems Based on the Urban Traffic Ontology
NASA Astrophysics Data System (ADS)
Yang, Wang-Dong; Wang, Tao
To address these issues, urban transport information is represented uniformly using an urban traffic ontology, and the rules and algebraic operations of semantic fusion are defined at the ontology level in order to fuse urban traffic information with semantic completeness and consistency. The paper thus exploits the semantic completeness of ontologies to build an urban traffic ontology model, with which we resolve problems such as ontology merging and equivalence verification in the semantic fusion of integrated traffic information. Information integration in urban transport can thereby gain a semantic fusion capability, reduce the amount of urban traffic data to be integrated, and improve the efficiency and completeness of traffic information queries. Through practical application in the intelligent traffic information integration platform of Changde city, the paper demonstrates that ontology-based semantic fusion increases the effectiveness and efficiency of urban traffic information integration, reduces storage requirements, and improves query efficiency and information completeness.
Dynamic Information Collection and Fusion
2015-12-02
Veeravalli, Venugopal (University of Illinois, Champaign)
This final report (AFRL-AFOSR-VA-TR-2016-0069, grant FA9550-10-1-0458) covers research on dynamic information collection, fusion, and inference from diverse modalities, organized under three inter-related thrusts.
SDIA: A dynamic situation driven information fusion algorithm for cloud environment
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Tong; Wang, Jian
2017-09-01
Information fusion is an important issue in the information integration domain. In order to form a general information fusion technique for complex and diverse situations, a new information fusion algorithm is proposed. Firstly, a fuzzy evaluation model of tag utility is proposed that can be used to compute the tag entropy. Secondly, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Thirdly, similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, in order to reduce the time complexity of the tree-compatibility matching algorithm, a fast ordered tree matching algorithm based on node entropy is proposed, which is used to support information fusion by ubiquitous situation. Since the algorithm evolves from graph-theoretic unordered tree matching, it can improve the recall and precision of situation-aware information fusion. The information fusion algorithm is compared with the star and random tree matching algorithms, and the differences between the three algorithms are analyzed from the viewpoint of isomorphism, demonstrating the novelty and applicability of the algorithm.
Information Fusion - Methods and Aggregation Operators
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we give an overview of such applications considering their three main uses. That is, we consider fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e., particular fusion methods) and their properties are briefly described as well.
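Two aggregation operators typically covered in such overviews, the weighted mean and the OWA (ordered weighted averaging) operator, can be sketched briefly. The distinguishing property of OWA is that weights attach to ranks rather than to sources, so max and median arise as special cases of the weight vector; the example values are illustrative.

```python
# Two common aggregation operators: weighted mean and OWA.
def weighted_mean(values, weights):
    """Weights attach to sources."""
    s = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / s

def owa(values, weights):
    """Ordered weighted averaging: values are sorted in descending order
    before the weights are applied, so weights attach to ranks."""
    ordered = sorted(values, reverse=True)
    return sum(v * w for v, w in zip(ordered, weights))

x = [0.2, 0.9, 0.5]
wm = weighted_mean(x, [1, 1, 1])   # plain mean as a special case
mx = owa(x, [1, 0, 0])             # max as a special case of OWA
med = owa(x, [0, 1, 0])            # median (odd n) as a special case
```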
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach
Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz
2017-01-01
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
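Moving horizon estimation as described above can be illustrated with a one-dimensional toy problem: over a sliding window, noisy GNSS-like position fixes and IMU-like velocity readings are fused in a single linear least-squares solve. This is a deliberately simplified sketch under stated assumptions (1-D, linear dynamics, known weights); the paper's estimator handles full 3-D pose with nonlinear optimization.

```python
# One-dimensional moving-horizon sketch: positions over the horizon are the
# decision variables; GNSS-like fixes give measurement residuals, IMU-like
# velocities give process-model residuals. (Illustrative, not the paper's code.)
import numpy as np

def mhe_window(y, v_imu, dt=1.0, lam=10.0):
    """Solve  min  sum_k (p_k - y_k)^2 + lam * sum_k (p_{k+1} - p_k - v_k*dt)^2
    as one linear least-squares problem over the horizon."""
    n = len(y)
    rows, rhs = [], []
    for k in range(n):                    # measurement residuals
        r = np.zeros(n); r[k] = 1.0
        rows.append(r); rhs.append(y[k])
    w = np.sqrt(lam)
    for k in range(n - 1):                # process-model residuals
        r = np.zeros(n); r[k + 1] = w; r[k] = -w
        rows.append(r); rhs.append(w * v_imu[k] * dt)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

# True motion p = 0,1,2,3,4 at unit velocity; noisy fixes, clean IMU velocity.
y = [0.3, 0.8, 2.2, 2.9, 4.1]
p = mhe_window(y, v_imu=[1.0] * 4)
```

Growing the horizon length trades computation for accuracy, which is exactly the trade-off the paper studies against filter-like (horizon of one) and batch (full-history) solutions.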
GNSS-ISR data fusion: General framework with application to the high-latitude ionosphere
NASA Astrophysics Data System (ADS)
Semeter, Joshua; Hirsch, Michael; Lind, Frank; Coster, Anthea; Erickson, Philip; Pankratius, Victor
2016-03-01
A mathematical framework is presented for the fusion of electron density measured by incoherent scatter radar (ISR) and total electron content (TEC) measured using global navigation satellite systems (GNSS). Both measurements are treated as projections of an unknown density field (for GNSS-TEC the projection is tomographic; for ISR the projection is a weighted average over a local spatial region) and discrete inverse theory is applied to obtain a higher fidelity representation of the field than could be obtained from either modality individually. The specific implementation explored herein uses the interpolated ISR density field as initial guess to the combined inverse problem, which is subsequently solved using maximum entropy regularization. Simulations involving a dense meridional network of GNSS receivers near the Poker Flat ISR demonstrate the potential of this approach to resolve sub-beam structure in ISR measurements. Several future directions are outlined, including (1) data fusion using lower level (lag product) ISR data, (2) consideration of the different temporal sampling rates, (3) application of physics-based regularization, (4) consideration of nonoptimal observing geometries, and (5) use of an ISR simulation framework for optimal experiment design.
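The projection view of both data types can be made concrete with a small linear sketch: TEC rows integrate the density field along ray paths, ISR rows average it over a beam volume, and the stacked system is inverted around an initial guess. Plain Tikhonov regularisation stands in here for the maximum-entropy regularisation used in the paper, and all matrices are toy values.

```python
# Toy joint inversion of "TEC" (tomographic) and "ISR" (beam-average)
# projections of a discretised density field. Illustrative sketch only.
import numpy as np

def fuse(H_tec, y_tec, H_isr, y_isr, x0, alpha=0.05):
    """Tikhonov-regularised joint inversion around an initial guess x0:
    min ||H x - y||^2 + alpha ||x - x0||^2, solved via normal equations."""
    H = np.vstack([H_tec, H_isr])
    y = np.concatenate([y_tec, y_isr])
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n),
                           H.T @ y + alpha * np.asarray(x0, dtype=float))

# Field of 3 cells; one ray crossing all cells, one beam averaging cells 0-1.
x_true = np.array([1.0, 2.0, 3.0])
H_tec = np.array([[1.0, 1.0, 1.0]])    # a single "TEC" ray: line integral
H_isr = np.array([[0.5, 0.5, 0.0]])    # a "beam" average over two cells
x = fuse(H_tec, H_tec @ x_true, H_isr, H_isr @ x_true, x0=[1.5, 1.5, 2.5])
```

The prior x0 plays the role of the interpolated ISR density field in the paper: it resolves the underdetermined directions while the data constrain the projections.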
Revisiting the JDL Model for Information Exploitation
2013-07-01
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements and roads, and may therefore lead to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant combination of multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy.
A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes, such as buildings, low vegetation, sport objects, forest, roads, and railroads.
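The Bayesian aggregation step of a consensus-style pipeline like INFOFUSE can be sketched as a product-of-likelihoods rule: each source reports a class-likelihood vector and the leader combines them multiplicatively with a prior. The class names and likelihood numbers below are invented for illustration; this is not the INFOFUSE implementation.

```python
# Naive-Bayes (consensus) aggregation of per-source class likelihoods.
def bayes_aggregate(likelihoods, prior):
    """Multiply the prior by each source's class likelihoods, then normalise."""
    classes = list(prior)
    post = dict(prior)
    for lk in likelihoods:          # one dict per source (e.g. SAR, VNIR, DSM)
        for c in classes:
            post[c] *= lk[c]
    z = sum(post.values())
    return {c: post[c] / z for c in classes}

prior = {"building": 1 / 3, "road": 1 / 3, "vegetation": 1 / 3}
sar = {"building": 0.7, "road": 0.2, "vegetation": 0.1}    # illustrative numbers
vnir = {"building": 0.5, "road": 0.1, "vegetation": 0.4}
fused = bayes_aggregate([sar, vnir], prior)
```

Because the sources agree on "building", the fused posterior concentrates there; a neural-network aggregator, the framework's alternative, would learn this combination instead of assuming independence.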
Statistical Mechanics of Viral Entry
NASA Astrophysics Data System (ADS)
Zhang, Yaojun; Dudko, Olga K.
2015-01-01
Viruses that have lipid-membrane envelopes infect cells by fusing with the cell membrane to release viral genes. Membrane fusion is known to be hindered by high kinetic barriers associated with drastic structural rearrangements—yet viral infection, which occurs by fusion, proceeds on remarkably short time scales. Here, we present a quantitative framework that captures the principles behind the invasion strategy shared by all enveloped viruses. The key to this strategy—ligand-triggered conformational changes in the viral proteins that pull the membranes together—is treated as a set of concurrent, bias field-induced activated rate processes. The framework results in analytical solutions for experimentally measurable characteristics of virus-cell fusion and enables us to express the efficiency of the viral strategy in quantitative terms. The predictive value of the theory is validated through simulations and illustrated through recent experimental data on influenza virus infection.
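The "concurrent activated rate processes" picture admits a compact textbook-style illustration (a simplification for intuition, not the paper's full theory): if fusion requires $N$ participating protein complexes to each complete an independent first-order conformational transition with rate $k$, the probability that all have completed by time $t$, and the mean completion time, are

```latex
P_N(t) = \left(1 - e^{-kt}\right)^N,
\qquad
\langle t_N \rangle = \frac{1}{k}\sum_{n=1}^{N}\frac{1}{n} \approx \frac{\ln N + \gamma}{k},
```

so the mean completion time grows only logarithmically with $N$: running many slow activated processes concurrently is one way such a strategy can keep overall entry fast despite high individual barriers.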
Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.
Chen, Ying-Chih; Wen, Chih-Yu
2012-11-08
This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is employed to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural network model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
Self-assessed performance improves statistical fusion of image labels
Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.
2014-01-01
Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. 
Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
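The comparison between simple majority voting and voting weighted by self-assessed performance can be illustrated with a toy sketch; the rater labels and confidence weights below are made up for demonstration.

```python
import numpy as np

def weighted_vote(labels, weights):
    """labels: (R, ...) integer label arrays from R raters;
    weights: (R,) per-rater weights (e.g., self-assessed confidences).
    Returns the label with the highest total weight at each voxel."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    score = np.zeros((len(classes),) + labels.shape[1:])
    for i, c in enumerate(classes):
        score[i] = np.tensordot(weights, labels == c, axes=1)
    return classes[np.argmax(score, axis=0)]

raters = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
# Equal weights reduce to simple majority voting; self-assessed
# confidence weights can shift the outcome where raters disagree.
majority = weighted_vote(raters, np.ones(3))
confident = weighted_vote(raters, np.array([0.9, 0.3, 0.1]))
```

At the third voxel the single confident rater overrides the two low-confidence raters, which is the mechanism by which self-assessment improves on an unweighted vote.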
Probing the fusion of neutron-rich nuclei with re-accelerated radioactive beams
NASA Astrophysics Data System (ADS)
Vadas, J.; Singh, Varinderjit; Wiggins, B. B.; Huston, J.; Hudan, S.; deSouza, R. T.; Lin, Z.; Horowitz, C. J.; Chbihi, A.; Ackermann, D.; Famiano, M.; Brown, K. W.
2018-03-01
We report the first measurement of the fusion excitation functions for 39,47K + 28Si at near-barrier energies. Evaporation residues resulting from the fusion process were identified by direct measurement of their energy and time-of-flight with high geometric efficiency. At the lowest incident energy, the cross section measured for the neutron-rich 47K-induced reaction is ≈6 times larger than that of the β-stable system. This experimental approach, both in measurement and in analysis, demonstrates how to efficiently measure fusion with low-intensity re-accelerated radioactive beams, establishing the framework for future studies.
NASA Astrophysics Data System (ADS)
Bagheri, H.; Schmitt, M.; Zhu, X. X.
2017-05-01
Recently, with InSAR data provided by the German TanDEM-X mission, a new global, high-resolution Digital Elevation Model (DEM) has been produced by the German Aerospace Center (DLR) with unprecedented height accuracy. However, due to SAR-inherent sensor specifics, its quality decreases over urban areas, making additional improvement necessary. On the other hand, DEMs derived from optical remote sensing imagery, such as Cartosat-1 data, have an apparently greater resolution in urban areas, making their fusion with TanDEM-X elevation data a promising perspective. The objective of this paper is two-fold: First, the height accuracies of TanDEM-X and Cartosat-1 elevation data over different land types are empirically evaluated in order to analyze the potential of TanDEM-X/Cartosat-1 DEM data fusion. After the quality assessment, urban DEM fusion using weighted averaging is investigated. In this experiment, both weight maps derived from the height error maps delivered with the DEM data and more sophisticated weight maps predicted by a procedure based on artificial neural networks (ANNs) are compared. The ANN framework employs several features that can describe the height residual performance to predict the weights used in the subsequent fusion step. The results demonstrate that especially the ANN-based framework is able to improve the quality of the final DEM through data fusion.
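The weighted-averaging DEM fusion baseline can be sketched as a pixelwise inverse-variance average. The toy height and error values are hypothetical; the paper's ANN-based approach would replace the error-map-derived weights with predicted ones.

```python
import numpy as np

def fuse_dems(dem_a, err_a, dem_b, err_b):
    """Pixelwise weighted average of two DEMs, weighting each pixel by
    the inverse variance implied by its height-error map (sketch of the
    non-ANN baseline described in the abstract)."""
    wa, wb = 1.0 / err_a**2, 1.0 / err_b**2
    return (wa * dem_a + wb * dem_b) / (wa + wb)

tandem   = np.array([[100.0, 102.0], [98.0, 101.0]])   # heights in metres
cartosat = np.array([[101.0, 100.0], [99.0, 100.0]])
# Hypothetical per-pixel height errors: 2 m for TanDEM-X, 1 m for Cartosat-1.
fused = fuse_dems(tandem, np.full((2, 2), 2.0), cartosat, np.full((2, 2), 1.0))
```

With these error maps the Cartosat-1 heights receive four times the weight of the TanDEM-X heights, pulling the fused surface toward the lower-error source.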
The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio
2015-04-15
Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.
NASA Astrophysics Data System (ADS)
Hunger, Sebastian; Karrasch, Pierre; Wessollek, Christine
2016-10-01
The European Water Framework Directive (Directive 2000/60/EC) is a mandatory agreement that guides the member states of the European Union in the field of water policy toward the goal of a good ecological status of water bodies. In recent years, several workflows and methods have been developed to determine and evaluate the characteristics and status of water bodies. Because of their areal coverage, remote sensing methods are a promising approach that can provide substantial additional value. With the increasing availability of optical and radar remote sensing data, the development of new methods to extract information from both types of data is ongoing. Since the limitations of the two data sets largely do not overlap, fusing them to obtain data with higher spectral resolution has the potential to yield additional information compared with processing each data set separately. On this basis, this study investigates the potential of multispectral and radar remote sensing data, and of their fusion, for assessing the parameters of water body structure. Owing to the medium spatial resolution of the freely available multispectral Sentinel-2 data sets, the study focuses in particular on the surroundings of the water bodies and their land use. SAR data are provided by the Sentinel-1 satellite. Different image fusion methods are tested, and the combined products of both data sets are subsequently evaluated. The evaluation of the single data sets and the fused data sets is performed by means of a maximum-likelihood classification and several statistical measures. The results indicate that the combined use of different remote sensing data sets can add value.
Liang, Fan; Xie, Weihong; Yu, Yang
2017-01-01
Robot-assisted motion compensated beating heart surgery has the advantage over conventional Coronary Artery Bypass Graft (CABG) surgery in terms of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot in meeting clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in the underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of the IMM. We conduct comparative experiments to evaluate the proposed approach with four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly. PMID:29124062
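A minimal sketch of the mixing step of a two-mode IMM estimator, assuming scalar per-model Kalman states. The transition matrix is fixed here for illustration, whereas the paper adapts it from the signal quality index; all numeric values are hypothetical.

```python
import numpy as np

def imm_mix(means, variances, mu, T):
    """One mixing step of a two-model IMM for a scalar motion signal.
    means/variances: per-model Kalman states; mu: model probabilities;
    T: 2x2 mode transition matrix. Returns the predicted model
    probabilities plus the mixed means and variances handed to each
    model's Kalman filter."""
    c = T.T @ mu                                   # predicted model probabilities
    w = (T * mu[:, None]) / c[None, :]             # mixing weights w[i, j]
    mixed_mean = w.T @ means
    mixed_var = np.array([np.sum(w[:, j] * (variances + (means - mixed_mean[j])**2))
                          for j in range(2)])
    return c, mixed_mean, mixed_var

mu = np.array([0.9, 0.1])                  # mostly in the "regular rhythm" mode
T = np.array([[0.95, 0.05],                # rare switch into the arrhythmia mode,
              [0.30, 0.70]])               # quicker recovery back to regular rhythm
c, m, v = imm_mix(np.array([1.0, 3.0]), np.array([0.1, 0.5]), mu, T)
```

The mixed states blend information across modes in proportion to the switch probabilities, which is what lets the estimator "switch" smoothly when an arrhythmia begins.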
Prototype of Kepler Processing Workflows For Microscopy And Neuroinformatics
Astakhov, V.; Bandrowski, A.; Gupta, A.; Kulungowski, A.W.; Grethe, J.S.; Bouwer, J.; Molina, T.; Rowley, V.; Penticoff, S.; Terada, M.; Wong, W.; Hakozaki, H.; Kwon, O.; Martone, M.E.; Ellisman, M.
2016-01-01
We report on progress of employing the Kepler workflow engine to prototype “end-to-end” application integration workflows that concern data coming from microscopes deployed at the National Center for Microscopy Imaging Research (NCMIR). This system is built upon the mature code base of the Cell Centered Database (CCDB) and integrated rule-oriented data system (IRODS) for distributed storage. It provides integration with external projects such as the Whole Brain Catalog (WBC) and Neuroscience Information Framework (NIF), which benefit from NCMIR data. We also report on specific workflows which spawn from main workflows and perform data fusion and orchestration of Web services specific for the NIF project. This “Brain data flow” presents a user with categorized information about sources that have information on various brain regions. PMID:28479932
History of Nuclear Fusion Research in Japan
NASA Astrophysics Data System (ADS)
Iguchi, Harukazu; Matsuoka, Keisuke; Kimura, Kazue; Namba, Chusei; Matsuda, Shinzaburo
In the late 1950s, just after atomic energy research was opened worldwide, there was a lively discussion among scientists on the strategy of nuclear fusion research in Japan. Finally, the decision was made that fusion research should start from the basics, namely research on plasma physics and the cultivation of human resources at universities under the Ministry of Education, Science and Culture (MOE). However, an endorsement was given that construction of an experimental device for fusion research would be approved sooner or later. Studies on toroidal plasma confinement started at the Japan Atomic Energy Research Institute (JAERI) under the Science and Technology Agency (STA) in the mid-1960s. A dualistic fusion research framework was thus established in Japan, a structure that has lasted to the present day. Fusion research activities over the last 50 years are described with the aid of a flowchart, which makes it convenient to survey the historical development of fusion research in Japan.
chimeraviz: a tool for visualizing chimeric RNA.
Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I
2017-09-15
Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor ( https://bioconductor.org/packages/release/bioc/html/chimeraviz.html ) under Artistic-2.0. Source code and support are available at GitHub ( https://github.com/stianlagstad/chimeraviz ). Contact: rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
A Smartphone-Based Driver Safety Monitoring System Using Data Fusion
Lee, Boon-Giin; Chung, Wan-Young
2012-01-01
This paper proposes a method for monitoring driver safety levels using a data fusion approach based on several discrete data types: eye features, bio-signal variation, in-vehicle temperature, and vehicle speed. The driver safety monitoring system was developed in practice in the form of an application for an Android-based smartphone device, where measuring safety-related data requires no extra monetary expenditure or equipment. Moreover, the system provides high resolution and flexibility. The safety monitoring process involves the fusion of attributes gathered from different sensors, including video, electrocardiography, photoplethysmography, temperature, and a three-axis accelerometer, that are assigned as input variables to an inference analysis framework. A Fuzzy Bayesian framework is designed to indicate the driver’s capability level and is updated continuously in real-time. The sensory data are transmitted via Bluetooth communication to the smartphone device. A fake incoming call warning service alerts the driver if his or her safety level is suspiciously compromised. Realistic testing of the system demonstrates the practical benefits of multiple features and their fusion in providing a more authentic and effective driver safety monitoring. PMID:23247416
Progressive multi-atlas label fusion by dictionary evolution.
Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang
2017-02-01
Accurate segmentation of anatomical structures in medical images is important in recent imaging based studies. In the past years, multi-atlas patch-based label fusion methods have achieved a great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework to seek for the suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to the existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.
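The single-layer, image-domain step that this work generalizes can be sketched as follows. Ridge regression stands in for the sparse patch coding used in the literature, and all patch and label values are hypothetical.

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, lam=0.01):
    """Single-layer patch-based label fusion sketch: estimate
    representation coefficients in the image domain (ridge regression
    here, as a stand-in for sparse coding) and apply them to the
    corresponding atlas labels to get a fused soft label."""
    A = atlas_patches.reshape(len(atlas_patches), -1).T   # pixels x atlases
    y = target_patch.ravel()
    coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    coef = np.clip(coef, 0, None)                         # keep weights nonnegative
    coef /= coef.sum() if coef.sum() > 0 else 1.0
    return coef @ atlas_labels                            # fused soft label

atlases = np.array([[0.9, 0.8, 0.1],     # three 3-pixel atlas patches
                    [0.2, 0.1, 0.9],
                    [0.5, 0.5, 0.5]])
labels = np.array([1.0, 0.0, 1.0])       # foreground probability per atlas patch
soft = patch_label_fusion(np.array([0.85, 0.75, 0.2]), atlases, labels)
```

The target patch resembles the first atlas patch, so most of the weight lands on its (foreground) label; the paper's layered dictionaries aim to correct cases where this image-domain weighting misleads the label-domain vote.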
Context-rich semantic framework for effective data-to-decisions in coalition networks
NASA Astrophysics Data System (ADS)
Grueneberg, Keith; de Mel, Geeth; Braines, Dave; Wang, Xiping; Calo, Seraphin; Pham, Tien
2013-05-01
In a coalition context, data fusion involves combining of soft (e.g., field reports, intelligence reports) and hard (e.g., acoustic, imagery) sensory data such that the resulting output is better than what it would have been if the data are taken individually. However, due to the lack of explicit semantics attached with such data, it is difficult to automatically disseminate and put the right contextual data in the hands of the decision makers. In order to understand the data, explicit meaning needs to be added by means of categorizing and/or classifying the data in relationship to each other from base reference sources. In this paper, we present a semantic framework that provides automated mechanisms to expose real-time raw data effectively by presenting appropriate information needed for a given situation so that an informed decision could be made effectively. The system utilizes controlled natural language capabilities provided by the ITA (International Technology Alliance) Controlled English (CE) toolkit to provide a human-friendly semantic representation of messages so that the messages can be directly processed in human/machine hybrid environments. The Real-time Semantic Enrichment (RTSE) service adds relevant contextual information to raw data streams from domain knowledge bases using declarative rules. The rules define how the added semantics and context information are derived and stored in a semantic knowledge base. The software framework exposes contextual information from a variety of hard and soft data sources in a fast, reliable manner so that an informed decision can be made using semantic queries in intelligent software systems.
NASA Astrophysics Data System (ADS)
Krysta, Monika; Kushida, Noriyuki; Kotselko, Yuriy; Carter, Jerry
2016-04-01
Possibilities for associating information from the four pillars constituting the CTBT monitoring and verification regime, namely the seismic, infrasound, hydroacoustic and radionuclide networks, have long been explored by the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). Based on the concept of overlaying waveform events with the geographical regions constituting possible sources of the detected radionuclides, interactive and non-interactive tools were built in the past. Based on the same concept, a design for a prototype Fused Event Bulletin was proposed recently. One of the key design elements of the proposed approach is the ability to access fusion results from either the radionuclide or the waveform technology products, which are available on different time scales and through various automatic and interactive products. To accommodate the various time scales, a dynamic product is envisioned that evolves while the results of the different technologies are being processed and compiled. The product would be available through the Secure Web Portal (SWP). In this presentation we describe the implementation of the data fusion functionality in the test framework of the SWP. In addition, we address possible refinements to the already implemented concepts.
Hybrid Histogram Descriptor: A Fusion Feature Representation for Image Retrieval.
Feng, Qinghe; Hao, Qiaohong; Chen, Yuqi; Yi, Yugen; Wei, Ying; Dai, Jiangyan
2018-06-15
Currently, visual sensors are becoming increasingly affordable and fashionable, accelerating the growth of image data. Image retrieval has attracted increasing interest due to space exploration, industrial, and biomedical applications. Nevertheless, designing an effective feature representation is acknowledged as a hard yet fundamental issue. This paper presents a fusion feature representation called the hybrid histogram descriptor (HHD) for image retrieval. The proposed descriptor jointly comprises two histograms: a perceptually uniform histogram, extracted by exploiting the color and edge orientation information in perceptually uniform regions; and a motif co-occurrence histogram, acquired by calculating the probability of a pair of motif patterns. To evaluate the performance, we benchmarked the proposed descriptor on the RSSCN7, AID, Outex-00013, Outex-00014 and ETHZ-53 datasets. Experimental results suggest that the proposed descriptor is more effective and robust than ten recent fusion-based descriptors under the content-based image retrieval framework. The computational complexity was also analyzed to give an in-depth evaluation. Furthermore, compared with state-of-the-art convolutional neural network (CNN)-based descriptors, the proposed descriptor also achieves comparable performance, but does not require any training process.
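The core idea of fusing two independently normalized histograms into one descriptor, then retrieving by a bin-wise distance, can be sketched in a few lines. The bin counts are made up, and the two component histograms are drastic simplifications of the paper's perceptually uniform and motif co-occurrence histograms.

```python
import numpy as np

def hybrid_descriptor(color_bins, motif_bins):
    """Toy stand-in for the HHD idea: normalize each component histogram
    independently, then concatenate them into one fusion feature."""
    h1 = color_bins / color_bins.sum()
    h2 = motif_bins / motif_bins.sum()
    return np.concatenate([h1, h2])

def l1_distance(d1, d2):
    """Bin-wise L1 distance between two descriptors."""
    return np.abs(d1 - d2).sum()

query = hybrid_descriptor(np.array([4.0, 1.0, 1.0]), np.array([2.0, 2.0]))
img_a = hybrid_descriptor(np.array([5.0, 1.0, 0.0]), np.array([3.0, 1.0]))
img_b = hybrid_descriptor(np.array([0.0, 3.0, 3.0]), np.array([0.0, 4.0]))
# Ranking database images by L1 distance to the query retrieves img_a,
# whose (hypothetical) content is closer to the query in both histograms.
```

Normalizing each histogram before concatenation keeps either component from dominating the distance purely because it has more mass, which is the usual reason for fusing at the descriptor rather than the raw-count level.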
Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.
Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z
2012-07-01
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
Implication of Culture: User Roles in Information Fusion for Enhanced Situational Understanding
Blasch, Erik (Air Force Research Lab)
2009-07-01
…situational understanding through assessment of the environment to determine a coherent state of affairs. The information is integrated with knowledge to … enhanced tacit knowledge understanding by (1) display fusion for data presentation (e.g. cultural segmentation), (2) interactive fusion to allow the …
Dual tracer imaging of SPECT and PET probes in living mice using a sequential protocol
Chapman, Sarah E; Diener, Justin M; Sasser, Todd A; Correcher, Carlos; González, Antonio J; Avermaete, Tony Van; Leevy, W Matthew
2012-01-01
Over the past 20 years, multimodal imaging strategies have motivated the fusion of Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT) scans with an X-ray computed tomography (CT) image to provide anatomical information, as well as a framework with which molecular and functional images may be co-registered. Recently, pre-clinical nuclear imaging technology has evolved to capture multiple SPECT or multiple PET tracers to further enhance the information content gathered within an imaging experiment. However, the use of SPECT and PET probes together, in the same animal, has remained a challenge. Here we describe a straightforward method using an integrated trimodal imaging system and a sequential dosing/acquisition protocol to achieve dual tracer imaging with 99mTc and 18F isotopes, along with anatomical CT, on an individual specimen. Dosing and imaging is completed so that minimal animal manipulations are required, full trimodal fusion is conserved, and tracer crosstalk including down-scatter of the PET tracer in SPECT mode is avoided. This technique will enhance the ability of preclinical researchers to detect multiple disease targets and perform functional, molecular, and anatomical imaging on individual specimens to increase the information content gathered within longitudinal in vivo studies. PMID:23145357
Study on the multi-sensors monitoring and information fusion technology of dangerous cargo container
NASA Astrophysics Data System (ADS)
Xu, Shibo; Zhang, Shuhui; Cao, Wensheng
2017-10-01
In this paper, a monitoring system for dangerous cargo containers based on multiple sensors is presented. In order to improve monitoring accuracy, multiple sensors are applied inside the dangerous cargo container. A multi-sensor information fusion solution for monitoring dangerous cargo containers is put forward, and information pre-processing, a fusion algorithm for homogeneous sensors, and information fusion based on a BP neural network are illustrated. Applying multiple sensors in the field of container monitoring has some novelty.
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information from any single image: since the various images capture different aspects of the measured parts, comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing, and the military field. This paper presents the contents and research methods of image fusion and the current state of the field domestically and abroad, and analyzes development trends.
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)
NASA Astrophysics Data System (ADS)
Blasch, Erik
2015-06-01
Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points, we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective and needs to be aligned with objective machine metrics.
Probing the fusion of neutron-rich nuclei with re-accelerated radioactive beams
Vadas, J.; Singh, Varinderjit; Wiggins, B. B.; ...
2018-03-27
Here, we report the first measurement of the fusion excitation functions for 39,47K + 28Si at near-barrier energies. Evaporation residues resulting from the fusion process were identified by direct measurement of their energy and time-of-flight with high geometric efficiency. At the lowest incident energy, the cross section measured for the neutron-rich 47K-induced reaction is ≈6 times larger than that of the β-stable system. This experimental approach, both in measurement and in analysis, demonstrates how to efficiently measure fusion with low-intensity re-accelerated radioactive beams, establishing the framework for future studies.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target not only multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters of this framework, including the correlation kernel, the size of the buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
2D to 3D fusion of echocardiography and cardiac CT for TAVR and TAVI image guidance.
Khalil, Azira; Faisal, Amir; Lai, Khin Wee; Ng, Siew Cheok; Liew, Yih Miin
2017-08-01
This study proposed a registration framework to fuse 2D echocardiography images of the aortic valve with a preoperative cardiac CT volume. The registration facilitates the fusion of CT and echocardiography to aid the diagnosis of aortic valve diseases and provide surgical guidance during transcatheter aortic valve replacement and implantation. The image registration framework consists of two major steps: temporal synchronization and spatial registration. Temporal synchronization allows time stamping of echocardiography time series data to identify frames that are at a similar cardiac phase as the CT volume. Spatial registration is an intensity-based normalized mutual information method applied with a pattern search optimization algorithm to produce an interpolated cardiac CT image that matches the echocardiography image. Our proposed registration method has been applied on the short-axis "Mercedes Benz" sign view of the aortic valve and the long-axis parasternal view of echocardiography images from ten patients. The accuracy of our fully automated registration method was 0.81 ± 0.08 and 1.30 ± 0.13 mm in terms of Dice coefficient and Hausdorff distance for short-axis aortic valve view registration, whereas for long-axis parasternal view registration it was 0.79 ± 0.02 and 1.19 ± 0.11 mm, respectively. This accuracy is comparable to gold standard manual registration by experts. There was no significant difference in aortic annulus diameter measurement between the automatically and manually registered CT images. Without the use of optical tracking, we have shown the applicability of this technique for effective fusion of echocardiography with preoperative CT volume to potentially facilitate catheter-based surgery.
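The spatial registration step above relies on intensity-based normalized mutual information. As a minimal illustration (not the authors' implementation), NMI can be estimated from a joint intensity histogram:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B): 2.0 for identical images, near 1.0 for independent ones."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]          # drop empty bins; 0 * log 0 is taken as 0
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
a = rng.random((64, 64))
nmi_same = normalized_mutual_information(a, a)
nmi_indep = normalized_mutual_information(a, rng.random((64, 64)))
print(nmi_same > nmi_indep)  # True: identical images share maximal information
```

A registration optimizer (such as the pattern search the paper uses) would repeatedly resample the CT volume under candidate transforms and pick the one maximizing this score against the echo frame.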
Enhancing atlas based segmentation with multiclass linear classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr
Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based methods of Coupé and Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary, so the multiatlas fusion can be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
Information Fusion for Situational Awareness
2003-01-01
Defense Technical Information Center Compilation Part Notice ADP021704 (part of compilation report ADP021634 thru ADP021736): Information Fusion for Situational Awareness, by Dr. John... The paper addresses Situation Assessment (level 2) processing applied to Situational Awareness and the knowledge of objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xiongbiao, E-mail: xluo@robarts.ca; Wan, Ying, E-mail: Ying.Wan@student.uts.edu.au; He, Xiangjian
Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework, based on an enhanced particle swarm optimization method, to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by possible premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. Results: The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than that of the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29.
Conclusions: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors, from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.
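The optimizer underlying this framework is particle swarm optimization. A generic global-best PSO, without the paper's observation boosting and adaptive evolutionary factors, can be sketched as follows (the toy objective and coefficient values are illustrative, not taken from the paper):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimization (standard inertia-weight form)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # each particle's best position so far
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)]                # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Toy objective: sphere function with minimum at (1, 1, 1)
best, fval = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
print("converged:", fval < 1e-2)
```

In the paper's setting, the objective would instead score a candidate endoscope pose against the camera image and electromagnetic sensor readings, and the update rule would additionally incorporate the current observation.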
The New Feedback Control System of RFX-mod Based on the MARTe Real-Time Framework
NASA Astrophysics Data System (ADS)
Manduchi, G.; Luchetta, A.; Soppelsa, A.; Taliercio, C.
2014-06-01
A real-time system has been successfully used since 2004 in the RFX-mod nuclear fusion experiment to control the position of the plasma and its Magneto Hydrodynamic (MHD) modes. However, its latency and the limited computation power of its processors prevented the use of more aggressive control algorithms. Therefore, a new hardware and software architecture has been designed to overcome these limitations and to provide shorter latency and much greater computation power. The new system is based on a Linux multi-core server and uses MARTe, a framework for real-time control which is gaining interest in the fusion community.
Probabilistic combination of static and dynamic gait features for verification
NASA Astrophysics Data System (ADS)
Bazin, Alex I.; Nixon, Mark S.
2005-03-01
This paper describes a novel probabilistic framework for biometric identification and data fusion. Based on intra- and inter-class variation extracted from training data, posterior probabilities describing the similarity between two feature vectors may be calculated directly from the data using the logistic function and Bayes rule. Using a large publicly available database, we show that the two imbalanced gait modalities may be fused within this framework. All fusion methods tested provide an improvement over the best single modality, with the weighted sum rule giving the best performance, hence showing that highly imbalanced classifiers may be fused in a probabilistic setting, improving not only the performance but also the generalized application capability.
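As a hedged illustration of this kind of score-level fusion (the weights and logistic parameters below are invented for the example, not taken from the paper), a logistic mapping followed by the weighted-sum rule looks like:

```python
import numpy as np

def logistic_posterior(score, w, b):
    """Map a raw match score to P(same identity | score) via the logistic function."""
    return 1.0 / (1.0 + np.exp(-(w * score + b)))

def weighted_sum_fusion(posteriors, weights):
    """Weighted-sum rule over per-modality posteriors (weights sum to 1)."""
    return float(np.dot(posteriors, weights))

# Hypothetical static and dynamic gait match scores for one comparison
p_static = logistic_posterior(2.1, w=1.0, b=-1.0)   # stronger modality
p_dynamic = logistic_posterior(0.3, w=1.0, b=-1.0)  # weaker modality
fused = weighted_sum_fusion(np.array([p_static, p_dynamic]), np.array([0.7, 0.3]))
print(round(fused, 3))  # -> 0.625
```

Giving the stronger modality the larger weight keeps the fused posterior close to the reliable classifier while still letting the weaker one contribute, which is how the weighted sum rule copes with imbalanced modalities.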
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
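The nonlinear autoregressive idea, regressing the high-fidelity output on the input augmented with the low-fidelity prediction, can be sketched with plain kernel ridge regression standing in for full Gaussian process regression (an assumption made for brevity; the paper's method is a GP scheme with uncertainty estimates, and the toy models below are illustrative):

```python
import numpy as np

def rbf_kernel(a, b, ls):
    """Squared-exponential kernel between row-wise point sets a and b."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def fit_krr(X, y, ls, reg=1e-6):
    """Kernel ridge regression; returns a closure that predicts at new points."""
    alpha = np.linalg.solve(rbf_kernel(X, X, ls) + reg * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, ls) @ alpha

# Cheap low-fidelity model (many samples) and an expensive high-fidelity
# model (few samples) related by a *nonlinear* transformation.
f_lo = lambda x: np.sin(4.0 * x)
f_hi = lambda x: (x - 0.2) * f_lo(x) ** 2

x_lo = np.linspace(0.0, 1.0, 20)[:, None]
x_hi = np.linspace(0.0, 1.0, 8)[:, None]
pred_lo = fit_krr(x_lo, f_lo(x_lo[:, 0]), ls=0.2)

# Nonlinear autoregressive step: regress the high-fidelity output
# on the augmented input [x, predicted_low_fidelity(x)].
aug = lambda X: np.hstack([X, pred_lo(X)[:, None]])
pred_hi = fit_krr(aug(x_hi), f_hi(x_hi[:, 0]), ls=0.5)

x_test = np.linspace(0.0, 1.0, 100)[:, None]
err = np.max(np.abs(pred_hi(aug(x_test)) - f_hi(x_test[:, 0])))
print("max abs error:", float(err))
```

Because the second regressor sees the low-fidelity prediction as an extra input feature, it can learn an arbitrary (here quadratic) cross-correlation between the fidelities rather than only the linear scaling assumed by classical autoregressive co-kriging.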
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining these other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking as a case study Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation.
Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., when only one Landsat-MODIS image pair is available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the capture cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods are capable of tackling this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization.
By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), and the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, owing to their correspondence in representing each pixel spectrum of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
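The contribution measure described above, the gap between a sensor's own Markov-machine conditional entropy and its x-Markov entropy conditioned on the network state, can be sketched on synthetic binary symbol sequences (the sequences and probabilities below are illustrative, not the paper's experimental data):

```python
import numpy as np

def conditional_entropy(y, c, ny, nc):
    """Empirical H(y_t | c_t) in nats, from joint symbol counts."""
    joint = np.zeros((nc, ny))
    for ci, yi in zip(c, y):
        joint[ci, yi] += 1
    pj = joint / joint.sum()
    pc = pj.sum(axis=1, keepdims=True)
    cond = np.divide(pj, pc, out=np.zeros_like(pj), where=pc > 0)
    logc = np.where(cond > 0, np.log(cond), 0.0)  # treat 0 * log 0 as 0
    return float(-np.sum(pj * logc))

rng = np.random.default_rng(1)
state = rng.integers(0, 2, 5000)                             # network information state
sensor = np.where(rng.random(5000) < 0.9, state, 1 - state)  # tracks the state 90% of the time
# Markov machine: condition the sensor on its own previous symbol;
# x-Markov machine: condition the sensor on the network state instead.
h_markov = conditional_entropy(sensor[1:], sensor[:-1], 2, 2)
h_xmarkov = conditional_entropy(sensor, state, 2, 2)
contribution = h_markov - h_xmarkov
print(contribution > 0)  # True: knowing the network state reduces the sensor's entropy
```

A sensor that merely echoes environmental noise would have h_xmarkov close to h_markov, so its contribution score would be near zero, which is exactly how the measure separates informative sensors from uninformative ones.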
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that realizes information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on its performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
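A minimal single-level sketch of this select-max fusion idea follows; the paper's method uses multi-level gradient pyramids with a salience measure, whereas here a single box-blur base/detail decomposition stands in for the pyramid:

```python
import numpy as np

def blur(img):
    """3x3 box blur via averaged shifted copies (edge-padded, 'same' size)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse(a, b):
    """Single-level base/detail fusion: average bases, select max-magnitude detail."""
    base_a, base_b = blur(a), blur(b)
    det_a, det_b = a - base_a, b - base_b
    base = (base_a + base_b) / 2.0                                   # coarse approximation
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # salient detail wins
    return base + detail

# Each toy "sensor image" contains a feature the other lacks.
a = np.zeros((16, 16)); a[4, 4] = 1.0
b = np.zeros((16, 16)); b[12, 12] = 1.0
f = fuse(a, b)
print(f[4, 4] > 0.5 and f[12, 12] > 0.5)  # True: both features survive fusion
```

Selecting the larger-magnitude detail coefficient at each location is what lets a composite retain salient structure from both an IR and a visible source, which simple pixel averaging would wash out.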
NASA Astrophysics Data System (ADS)
Tang, Xiaojing
Fast and accurate monitoring of tropical forest disturbance is essential for understanding current patterns of deforestation as well as helping eliminate illegal logging. This dissertation explores the use of data from different satellites for near real-time monitoring of forest disturbance in tropical forests, including: development of new monitoring methods; development of new assessment methods; and assessment of the performance and operational readiness of existing methods. Current methods for accuracy assessment of remote sensing products do not address the priority of near real-time monitoring of detecting disturbance events as early as possible. I introduce a new assessment framework for near real-time products that focuses on the timing and the minimum detectable size of disturbance events. The new framework reveals the relationship between change detection accuracy and the time needed to identify events. In regions that are frequently cloudy, near real-time monitoring using data from a single sensor is difficult. This study extends the work by Xin et al. (2013) and develops a new time series method (Fusion2) based on fusion of Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data. Results of three test sites in the Amazon Basin show that Fusion2 can detect 44.4% of the forest disturbance within 13 clear observations (82 days) after the initial disturbance. The smallest event detected by Fusion2 is 6.5 ha. Also, Fusion2 detects disturbance faster and has less commission error than more conventional methods. In a comparison of coarse resolution sensors, MODIS Terra and Aqua combined provides faster and more accurate detection of disturbance events than VIIRS (Visible Infrared Imaging Radiometer Suite) and MODIS single sensor data. The performance of near real-time monitoring using VIIRS is slightly worse than MODIS Terra but significantly better than MODIS Aqua. 
New monitoring methods developed in this dissertation provide forest protection organizations the capacity to monitor illegal logging events promptly. In the future, combining two Landsat and two Sentinel-2 satellites will provide global coverage at 30 m resolution every 4 days, and routine monitoring may be possible at high resolution. The methods and assessment framework developed in this dissertation are adaptable to newly available datasets.
An Approach to Automated Fusion System Design and Adaptation
Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker
2017-01-01
Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum.
NASA Astrophysics Data System (ADS)
Emmerman, Philip J.
2005-05-01
Teams of robots or mixed teams of warfighters and robots on reconnaissance and other missions can benefit greatly from a local fusion station. A local fusion station is defined here as a small mobile processor with interfaces to enable the ingestion of multiple heterogeneous sensor data and information streams, including blue force tracking data. These data streams are fused and integrated with contextual information (terrain features, weather, maps, dynamic background features, etc.), and displayed or processed to provide real time situational awareness to the robot controller or to the robots themselves. These blue and red force fusion applications remove redundancies, lessen ambiguities, correlate, aggregate, and integrate sensor information with context such as high resolution terrain. Applications such as safety, team behavior, asset control, training, pattern analysis, etc. can be generated or enhanced by these fusion stations. This local fusion station should also enable the interaction between these local units and a global information world.
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method effectively improves both the target indication and the scene spectral features of fusion images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the IR image into regions by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail. It also gives a fusion result superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems.
Dalin, Martin G; Katabi, Nora; Persson, Marta; Lee, Ken-Wing; Makarov, Vladimir; Desrichard, Alexis; Walsh, Logan A; West, Lyndsay; Nadeem, Zaineb; Ramaswami, Deepa; Havel, Jonathan J; Kuo, Fengshen; Chadalavada, Kalyani; Nanjangud, Gouri J; Ganly, Ian; Riaz, Nadeem; Ho, Alan L; Antonescu, Cristina R; Ghossein, Ronald; Stenman, Göran; Chan, Timothy A; Morris, Luc G T
2017-10-30
Myoepithelial carcinoma (MECA) is an aggressive salivary gland cancer with largely unknown genetic features. Here we comprehensively analyze molecular alterations in 40 MECAs using integrated genomic analyses. We identify a low mutational load, and high prevalence (70%) of oncogenic gene fusions. Most fusions involve the PLAG1 oncogene, which is associated with PLAG1 overexpression. We find FGFR1-PLAG1 in seven (18%) cases, and the novel TGFBR3-PLAG1 fusion in six (15%) cases. TGFBR3-PLAG1 promotes a tumorigenic phenotype in vitro, and is absent in 723 other salivary gland tumors. Other novel PLAG1 fusions include ND4-PLAG1, a fusion between mitochondrial and nuclear DNA. We also identify a higher number of copy number alterations as a risk factor for recurrence, independent of tumor stage at diagnosis. Our findings indicate that MECA is a fusion-driven disease, nominate TGFBR3-PLAG1 as a hallmark of MECA, and provide a framework for future diagnostic and therapeutic research in this lethal cancer.
Processing and Fusion of Electro-Optic Information
2001-04-01
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010886. TITLE: Processing and Fusion of Electro-Optic Information. Component part numbers ADP010865 thru ADP010894 comprise the compilation report. The part describes an additional electro-optic (EO) sensor model within OOPSDG and performance estimates found prior to producing the new model.
Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task.
G Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning
2015-01-01
Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task. Copyright © 2014 Elsevier Ltd. All rights reserved.
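Late fusion of retrieval runs, as compared in campaigns like the one above, is commonly done with CombSUM/CombMNZ-style score combination. A minimal sketch (the `comb_fusion` helper and min-max normalisation are illustrative choices, not necessarily the exact strategies evaluated in the paper):

```python
def comb_fusion(run_scores, method="combMNZ"):
    """Late fusion of retrieval runs. run_scores: list of dicts mapping
    document id -> score. Scores are min-max normalised per run, then
    summed (CombSUM) or additionally multiplied by the number of runs
    retrieving the document (CombMNZ)."""
    normed = []
    for run in run_scores:
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0                  # guard single-document runs
        normed.append({d: (s - lo) / span for d, s in run.items()})
    fused, hits = {}, {}
    for run in normed:
        for d, s in run.items():
            fused[d] = fused.get(d, 0.0) + s
            hits[d] = hits.get(d, 0) + 1
    if method == "combMNZ":
        fused = {d: s * hits[d] for d, s in fused.items()}
    return sorted(fused, key=fused.get, reverse=True)
```

CombMNZ rewards documents retrieved by several runs, which is one way visual and textual evidence can reinforce each other.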
Study of impurity effects on CFETR steady-state scenario by self-consistent integrated modeling
NASA Astrophysics Data System (ADS)
Shi, Nan; Chan, Vincent S.; Jian, Xiang; Li, Guoqiang; Chen, Jiale; Gao, Xiang; Shi, Shengyu; Kong, Defeng; Liu, Xiaoju; Mao, Shifeng; Xu, Guoliang
2017-12-01
Impurity effects on the fusion performance of the China Fusion Engineering Test Reactor (CFETR) due to extrinsic seeding are investigated. An integrated 1.5D modeling workflow evolves the plasma equilibrium and all transport channels to steady state. The One Modeling Framework for Integrated Tasks (OMFIT) is used to couple the transport solver, the MHD equilibrium solver, and the source and sink calculations. A self-consistent impurity profile constructed on a steady-state background plasma, which satisfies quasi-neutrality and true steady state, is presented for the first time. Studies are performed based on an optimized fully non-inductive scenario with varying concentrations of argon (Ar) seeding. It is found that fusion performance improves before dropping off with increasing Z_eff, while the confinement remains at a high level. Further analysis of transport for these plasmas shows that low-k ion temperature gradient modes dominate the turbulence. The decrease in linear growth rate and the resultant fluxes of all channels with increasing Z_eff can be traced to the change of the impurity profile by transport. The improvement in confinement levels off at higher Z_eff. Over the regime of study there is a competition between the suppressed transport and the increasing radiation, which leads to a peak in fusion performance at Z_eff ≈ 2.78 for CFETR. Extrinsic impurity seeding to control the divertor heat load will need to be optimized around this value for the best fusion performance.
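Quasi-neutrality fixes the electron density once the ion densities are chosen, so the effective charge follows directly from Z_eff = Σ n_i Z_i² / n_e with n_e = Σ n_i Z_i. A minimal sketch (the `z_eff` helper and the fully stripped charge states are assumptions, not the workflow's actual impurity treatment):

```python
def z_eff(species):
    """Effective plasma charge, Z_eff = sum(n_i * Z_i^2) / n_e, with the
    electron density fixed by quasi-neutrality, n_e = sum(n_i * Z_i).
    species: list of (density, charge) pairs, charges assumed fully
    stripped."""
    n_e = sum(n * z for n, z in species)
    return sum(n * z * z for n, z in species) / n_e
```

For a pure hydrogenic plasma Z_eff = 1; adding a small argon fraction raises it toward the ~2.78 optimum quoted in the abstract.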
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have a major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine (SVM)-based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory-based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
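The subspace-classifier-plus-weighted-fusion idea can be sketched as follows (a simplified stand-in: here the fitness measure is plain cross-validation accuracy, and `fused_predict_proba` is an illustrative helper, not the paper's two-level fitness scheme):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fused_predict_proba(X, y, subspaces, X_new):
    """Train one SVM per feature subspace, weight each classifier by a
    fitness score (here: mean cross-validation accuracy, a stand-in for
    the paper's fitness measures), and fuse the class-probability
    outputs by a fitness-weighted average."""
    probas, weights = [], []
    for cols in subspaces:
        clf = SVC(probability=True).fit(X[:, cols], y)
        fitness = cross_val_score(clf, X[:, cols], y, cv=3).mean()
        probas.append(clf.predict_proba(X_new[:, cols]))
        weights.append(fitness)
    w = np.asarray(weights) / np.sum(weights)
    # Contract the classifier axis: result is (n_samples, n_classes).
    return np.tensordot(w, np.asarray(probas), axes=1)
```

Dempster-Shafer combination could replace the weighted average in the final stage without changing the rest of the pipeline.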
The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion
Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B.; Bono, Christopher M.
2015-01-01
Study Design An Internet-based evaluation of Web sites regarding lumbar fusion. Objective The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods Using five search engines and three different search terms of varying complexity (“low back fusion,” “lumbar fusion,” and “lumbar arthrodesis”), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by the National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care. PMID:26933614
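The Flesch-Kincaid grade level used in this study is a fixed formula over sentence and word statistics: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch (the `fk_grade` helper and the vowel-group syllable approximation are illustrative; published calculators use more careful syllable counting):

```python
import re

def fk_grade(text):
    """Flesch-Kincaid reading grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
    Syllables are approximated by counting vowel groups per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

Short sentences of monosyllables score near (or below) grade 0, while polysyllabic medical prose scores many grades higher, which is exactly the gap the study measures.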
Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A
2017-09-01
Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
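The temporal-consistency idea, blending each frame's noisy measurement with a motion-model prediction via Kalman filtering, can be illustrated with a minimal scalar constant-velocity filter (a sketch only; the paper applies filtering to valve model parameters, not to a single scalar, and `kalman_smooth` with its noise settings is an assumption):

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """Minimal constant-velocity Kalman filter over a scalar track:
    each noisy per-frame measurement is blended with a prediction from
    the previous state, enforcing temporal consistency."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])      # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                  # we observe position only
    Q = q * np.eye(2)                           # process noise covariance
    R = np.array([[r]])                         # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)     # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

On a noisy ramp the filtered track has lower error than the raw measurements, which is the benefit temporal consistency buys in the segmentation pipeline.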
Bayesian data fusion for spatial prediction of categorical variables in environmental sciences
NASA Astrophysics Data System (ADS)
Gengler, Sarah; Bogaert, Patrick
2014-12-01
First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions to combine several sources of data whatever the nature of the information. However, the various attempts that were made for adapting the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is somehow a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is somehow a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
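Under the conditional independence hypothesis, BDF-style fusion of several categorical sources reduces to multiplying per-source posteriors against a common prior and renormalising. A minimal sketch (the `bdf_categorical` helper is an illustration of the conditional-independence algebra, not the paper's full spatial implementation; the prior is assumed strictly positive):

```python
import numpy as np

def bdf_categorical(prior, source_posteriors):
    """Fuse several sources about one categorical variable under
    conditional independence:
        p(c | y_1..y_n) ∝ p(c) * prod_i [ p(c | y_i) / p(c) ].
    prior: (k,) class probabilities (all > 0); source_posteriors:
    list of (k,) per-source posterior vectors."""
    prior = np.asarray(prior, float)
    fused = prior.copy()
    for post in source_posteriors:
        fused *= np.asarray(post, float) / prior
    return fused / fused.sum()
```

Two sources that both lean toward the same class reinforce each other, yielding a sharper fused posterior than either source alone.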
Lappala, Anna; Nishima, Wataru; Miner, Jacob; Fenimore, Paul; Fischer, Will; Hraber, Peter; Zhang, Ming; McMahon, Benjamin; Tung, Chang-Shung
2018-05-10
Membrane fusion proteins are responsible for viral entry into host cells—a crucial first step in viral infection. These proteins undergo large conformational changes from pre-fusion to fusion-initiation structures, and, despite differences in viral genomes and disease etiology, many fusion proteins are arranged as trimers. Structural information for both pre-fusion and fusion-initiation states is critical for understanding virus neutralization by the host immune system. In the case of Ebola virus glycoprotein (EBOV GP) and Zika virus envelope protein (ZIKV E), pre-fusion state structures have been identified experimentally, but only partial structures of fusion-initiation states have been described. While the fusion-initiation structure is in an energetically unfavorable state that is difficult to solve experimentally, the existing structural information combined with computational approaches enabled the modeling of fusion-initiation state structures of both proteins. These structural models provide an improved understanding of four different neutralizing antibodies in the prevention of viral host entry.
Semiotic foundation for multisensor-multilook fusion
NASA Astrophysics Data System (ADS)
Myler, Harley R.
1998-07-01
This paper explores the concept of an application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative. The term semiotic refers to signs, or signatory data that encapsulates information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is explored, and a design for a multisensor system as an information fusion system is presented. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.
Advances in Multi-Sensor Information Fusion: Theory and Applications 2017.
Jin, Xue-Bo; Sun, Shuli; Wei, Hong; Yang, Feng-Bao
2018-04-11
The information fusion technique can integrate a large amount of data and knowledge representing the same real-world object and obtain a consistent, accurate, and useful representation of that object. The data may be independent or redundant, and can be obtained by different sensors at the same time or at different times. A suitable combination of investigative methods can substantially increase the information gained in comparison with that from a single sensor. Multi-sensor information fusion has been a key issue in sensor research since the 1970s, and it has been applied in many fields. For example, the manufacturing and process control industries can generate a lot of data, which have real, actionable business value. The fusion of these data can greatly improve productivity through digitization. The goal of this special issue is to report innovative ideas and solutions for multi-sensor information fusion in the emerging applications era, focusing on development, adoption, and applications.
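For redundant readings of the same quantity, the classic fusion rule is the inverse-variance (minimum-variance) weighted average: the fused estimate is never worse than the best single sensor. A minimal sketch (the `fuse_redundant` helper is illustrative, assuming independent Gaussian sensor errors):

```python
def fuse_redundant(readings):
    """Minimum-variance fusion of redundant sensor readings:
        x_hat = sum(x_i / s_i^2) / sum(1 / s_i^2),
    with fused variance 1 / sum(1 / s_i^2).
    readings: list of (value, std) pairs with independent errors."""
    inv_vars = [1.0 / (s * s) for _, s in readings]
    x = sum(v * w for (v, _), w in zip(readings, inv_vars)) / sum(inv_vars)
    var = 1.0 / sum(inv_vars)
    return x, var
```

Two equally noisy sensors halve the variance; a very precise sensor dominates a sloppy one, which is the quantitative sense in which fusion beats any single source.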
A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.
Ligorio, Gabriele; Sabatini, Angelo Maria
2015-12-19
In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas a lower correlation (0.90) was found for one axis of the gyroscope. In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.
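The core of such a simulation environment is corrupting ground truth with sensor nuisance factors and measuring the downstream effect. A minimal sketch for one gyroscope axis (the `simulate_gyro` helper and its bias/noise magnitudes are illustrative assumptions, not the paper's validated sensor models):

```python
import numpy as np

def simulate_gyro(true_rate, dt=0.01, bias=0.02, noise_std=0.05, seed=0):
    """Corrupt a ground-truth angular-rate signal with a constant bias
    and white noise (typical MIMU nuisance factors), then integrate both
    signals to expose the resulting orientation drift."""
    rng = np.random.default_rng(seed)
    meas = true_rate + bias + noise_std * rng.standard_normal(true_rate.shape)
    angle_true = np.cumsum(true_rate) * dt
    angle_meas = np.cumsum(meas) * dt
    return meas, angle_true, angle_meas
```

Because ground truth is known exactly, any estimator fed the simulated measurements can be scored against it, which is the benchmarking principle the paper builds on.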
NASA Astrophysics Data System (ADS)
Blasch, Erik; Kadar, Ivan; Hintz, Kenneth; Biermann, Joachim; Chong, Chee-Yee; Salerno, John; Das, Subrata
2007-04-01
Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and platforms need to be informed, based on mission needs, on how to collect, process, and exploit data. To meet these growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006 to discuss the various issues surrounding the interaction of Resource Management with Level 2/3 Situation and Threat Assessment. This paper briefly consolidates the discussion of the invited panel panelists. The common themes include: (1) Addressing the user in system management, sensor control, and knowledge based information collection (2) Determining a standard set of fusion metrics for optimization and evaluation based on the application (3) Allowing dynamic and adaptive updating to deliver timely information needs and information rates (4) Optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis (5) Providing constraints from distributed resource mission planning and scheduling; and (6) Defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
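The shift-invariance property at the heart of the scheme above can be illustrated with a much simpler undecimated decomposition: an additive à-trous-style split into approximation and detail (a sketch under simplifying assumptions; `atrous_fuse` and the Gaussian low-pass are illustrative stand-ins, not the paper's spectrally factorised UWT filter bank):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def atrous_fuse(a, b, sigma=2.0):
    """Additive undecimated fusion: each image is split into a smooth
    approximation (no decimation, hence no shift-variance) and a detail
    residual; approximations are averaged, details fused by the
    max-absolute rule, and the result reassembled by addition."""
    aA, bA = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    aD, bD = a - aA, b - bA
    fD = np.where(np.abs(aD) >= np.abs(bD), aD, bD)
    return 0.5 * (aA + bA) + fD
```

Because no coefficients are discarded, identical inputs reconstruct exactly, and fused edges do not acquire the ringing that decimated transforms can introduce.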
Data fusion approach to threat assessment for radar resources management
NASA Astrophysics Data System (ADS)
Komorniczak, Wojciech; Pietrasinski, Jerzy; Solaiman, Basel
2002-03-01
The paper deals with the problem of multifunction radar resources management. The problem consists of target/task ranking and task scheduling. The paper is focused on target ranking, with a data fusion approach. The data from the radar (object's velocity, range, altitude, direction, etc.), the IFF system (Identification Friend or Foe), and the ESM system (Electronic Support Measures, providing information concerning a threat's electromagnetic activities) are used to decide the importance assigned to each detected target. The main problem consists of the multiplicity of the various types of input information. The information from the radar is of the probabilistic or ambiguous imperfection type, and the IFF information is of the evidential type. To take advantage of these information sources, an advanced data fusion system is necessary. The system should deal with the following situations: fusion of evidential and fuzzy information, and fusion of evidential information with a priori information. The paper describes a system which fuses fuzzy and evidential information without first converting them to the same type of information. The use of dynamic fuzzy qualifiers is also proposed. The paper shows the results of preliminary tests of the system.
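The evidential side of such a system is conventionally handled with Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination (the `dempster_combine` helper and the hostile/friendly frame are illustrative, not the paper's specific fusion operator):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame of discernment. Masses are dicts mapping frozenset
    hypotheses to belief mass; mass assigned to empty intersections
    (conflict) is renormalised away."""
    combined, conflict = {}, 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

Combining an IFF report with an ESM report that both partially support "hostile" concentrates mass on that hypothesis while keeping residual mass on the full frame.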
Analysis of the Subunit Stoichiometries in Viral Entry
Magnus, Carsten; Regoes, Roland R.
2012-01-01
Virions of the Human Immunodeficiency Virus (HIV) infect cells by first attaching with their surface spikes to the CD4 receptor on target cells. This leads to conformational changes in the viral spikes, enabling the virus to engage a coreceptor, commonly CCR5 or CXCR4, and subsequently to insert the fusion peptide into the cellular membrane. Finally, the viral and the cellular membranes fuse. The HIV spike is a trimer consisting of three identical heterodimers composed of the gp120 and gp41 envelope proteins. Each of the gp120 proteins in the trimer is capable of attaching to the CD4 receptor and the coreceptor, and each of the three gp41 units harbors a fusion domain. It is still under debate how many of the envelope subunits within a given trimer have to bind to the CD4 receptors and to the coreceptors, and how many gp41 protein fusion domains are required for fusion. These numbers are referred to as subunit stoichiometries. We present a mathematical framework for estimating these parameters individually by analyzing infectivity assays with pseudotyped viruses. We find that the number of spikes that are engaged in mediating cell entry and the distribution of the spike number play important roles for the estimation of the subunit stoichiometries. Our model framework also shows why it is important to subdivide the question of the number of functional subunits within one trimer into the three different subunit stoichiometries. In a second step, we extend our models to study whether the subunits within one trimer cooperate during receptor binding and fusion. As an example for how our models can be applied, we reanalyze a data set on subunit stoichiometries. We find that two envelope proteins have to engage with CD4-receptors and coreceptors and that two fusion proteins must be revealed within one trimer for viral entry.
Our study is motivated by the mechanism of HIV entry but the experimental technique and the model framework can be extended to other viral systems as well. PMID:22479399
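The stoichiometry framework amounts to nested binomial thresholds: a trimer is entry-competent when at least k of its 3 subunits function, and a virion infects when enough trimers are competent. A minimal sketch (the helpers `trimer_prob` and `infectivity`, and the fixed spike-number assumption, are simplifications of the paper's models, which also account for spike-number distributions):

```python
from math import comb

def trimer_prob(p, k):
    """Probability that a trimer is entry-competent when each of its
    three subunits is functional independently with probability p and
    at least k subunits are required (the subunit stoichiometry)."""
    return sum(comb(3, j) * p**j * (1 - p)**(3 - j) for j in range(k, 4))

def infectivity(p, k, spikes, t):
    """Relative infectivity of a virion carrying a fixed number of
    trimers (`spikes`) when at least `t` entry-competent trimers are
    needed for fusion."""
    q = trimer_prob(p, k)
    return sum(comb(spikes, j) * q**j * (1 - q)**(spikes - j)
               for j in range(t, spikes + 1))
```

Fitting curves of this shape to pseudotyped-virus infectivity data is what lets k and t be estimated separately.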
Fuselets: an agent based architecture for fusion of heterogeneous information and data
NASA Astrophysics Data System (ADS)
Beyerer, Jürgen; Heizmann, Michael; Sander, Jennifer
2006-04-01
A new architecture for fusing information and data from heterogeneous sources is proposed. The approach takes criminalistics as a model. In analogy to the work of detectives, who attempt to investigate crimes, software agents are initiated that pursue clues and try to consolidate or to dismiss hypotheses. Like their human counterparts, they can, if questions beyond their competences arise, consult expert agents. Within the context of a certain task, region, and time interval, specialized operations are applied to each relevant information source, e.g. IMINT, SIGINT, ACINT,..., HUMINT, databases, etc., in order to establish hit lists of first clues. Each clue is described by its pertaining facts, uncertainties, and dependencies in the form of a local degree-of-belief (DoB) distribution in a Bayesian sense. For each clue an agent is initiated which cooperates with other agents and experts. Expert agents support making use of different information sources. Consultations of experts, capable of accessing certain information sources, result in changes of the DoB of the pertaining clue. According to the significance of concentration of their DoB distribution, clues are abandoned or pursued further to formulate task-specific hypotheses. Communications between the agents serve to find out whether different clues belong to the same cause and thus can be put together. At the end of the investigation process, the different hypotheses are evaluated by a jury and a final report is created that constitutes the fusion result. The approach proposed avoids calculating global DoB distributions by adopting a local Bayesian approximation and thus reduces the complexity of the exact problem essentially. Different information sources are transformed into DoB distributions using the maximum entropy paradigm and considering known facts as constraints. Nominal, ordinal, and cardinal quantities can be treated equally within this framework.
The architecture is scalable by tailoring the number of agents according to the available computer resources, to the priority of tasks, and to the maximum duration of the fusion process. Furthermore, the architecture allows cooperative work of human and automated agents and experts, as long as not all subtasks can be accomplished automatically.
NASA Astrophysics Data System (ADS)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, a concrete experimental comparison of the hybrid framework with traditional fusion methods, demonstrating and quantifying this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical analysis comparing the accuracy and performance of hybrid network theory with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference over other fusion tools.
Information Fusion of Conflicting Input Data.
Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael
2016-10-29
Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges for the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.
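The conflict-reducing idea, letting a source's influence decay with its disagreement with the other sources, can be sketched very simply (an illustrative toy, emphatically not the MACRO/μBalTLCS algorithm itself; `conflict_reduced_fusion` and its weighting function are assumptions):

```python
import numpy as np

def conflict_reduced_fusion(values):
    """Conflict-aware fusion sketch: each source's weight decays with
    its mean absolute disagreement with the remaining sources, so an
    outlying, conflicting reading influences the fused value less than
    it would in a plain average."""
    x = np.asarray(values, float)
    n = len(x)
    conflict = np.array([np.abs(x[i] - np.delete(x, i)).mean()
                         for i in range(n)])
    w = 1.0 / (1.0 + conflict)   # high conflict -> low weight
    w /= w.sum()
    return float(w @ x)
```

With three agreeing sensors and one outlier, the fused value stays near the consensus rather than being dragged toward the outlier as the plain mean is.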
NASA Astrophysics Data System (ADS)
Blasch, Erik; Kadar, Ivan; Grewe, Lynne L.; Brooks, Richard; Yu, Wei; Kwasinski, Andres; Thomopoulos, Stelios; Salerno, John; Qi, Hairong
2017-05-01
During the 2016 SPIE DSS conference, nine panelists were invited to highlight the trends and opportunities in cyber-physical systems (CPS) and the Internet of Things (IoT) with information fusion. The world will be ubiquitously outfitted with many sensors to support our daily living through the Internet of Things (IoT), to manage infrastructure developments with cyber-physical systems (CPS), and to provide communication through networked information fusion technology over the internet (NIFTI). This paper summarizes the panel discussions on the opportunities for information fusion in the growing trends of CPS and IoT. The summary covers the concepts and areas in which information fusion supports CPS/IoT, including situation awareness, transportation, and smart grids.
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major potential of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical, or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for multi-modality multi-agent data/information fusion with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results, compared to a fuzzy logic model, strongly supported the validity of the new model and inspired future directions for different levels of fusion and different applications.
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes the average weighted fusion method, image pyramid fusion, and the wavelet transform, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, we can evaluate the effect of each fusion method. Experiments showed that the Laplacian pyramid image fusion algorithm is well suited to nighttime image fusion: it reduces halo artifacts while preserving image details.
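The two evaluation measures named above, information entropy and cross entropy of the fused image, are straightforward to compute from gray-level histograms. A minimal sketch for 8-bit images; the synthetic test images are illustrative only:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cross_entropy(src, fused, bins=256, eps=1e-12):
    """Cross entropy between source and fused gray-level distributions
    (lower means the fused image preserves the source's distribution better)."""
    hs, _ = np.histogram(src, bins=bins, range=(0, 256))
    hf, _ = np.histogram(fused, bins=bins, range=(0, 256))
    ps = hs / hs.sum() + eps
    pf = hf / hf.sum() + eps
    return float(-(ps * np.log2(pf)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)            # single gray level: zero entropy
noisy = rng.integers(0, 256, (64, 64))   # near-uniform: close to 8 bits
```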
Low-energy nuclear reaction of the 14N+169Tm system: Incomplete fusion
NASA Astrophysics Data System (ADS)
Kumar, R.; Sharma, Vijay R.; Yadav, Abhishek; Singh, Pushpendra P.; Agarwal, Avinash; Appannababu, S.; Mukherjee, S.; Singh, B. P.; Ali, R.; Bhowmik, R. K.
2017-11-01
Excitation functions of reaction residues produced in the 14N+169Tm system have been measured to high precision at energies above the fusion barrier, ranging from 1.04 VB to 1.30 VB, and analyzed in the framework of the statistical model code PACE4. Analysis of α-emitting channels points toward the onset of incomplete fusion even at slightly above-barrier energies, where complete fusion is supposed to be one of the dominant processes. The onset and strength of incomplete fusion have been deduced and studied in terms of various entrance-channel parameters. The present results, together with a reanalysis of existing data for various projectile-target combinations, conclusively suggest a strong influence of projectile structure on the onset of incomplete fusion. A strong dependence on the Coulomb effect (ZPZT) has also been observed for the present system and for the different projectile-target combinations available in the literature. It is concluded that the fraction of incomplete fusion increases linearly with ZPZT, being larger for larger ZPZT values, which indicates a clear linear systematics.
Green fluorescence protein-based content-mixing assay of SNARE-driven membrane fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Paul; Kong, Byoungjae; Jung, Young-Hun
Soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins mediate intracellular membrane fusion by forming a ternary SNARE complex. A minimalist approach utilizing proteoliposomes with reconstituted SNARE proteins yielded a wealth of information pinpointing the molecular mechanism of SNARE-mediated fusion and its regulation by accessory proteins. Two important attributes of a membrane fusion are lipid mixing and the formation of an aqueous passage between apposing membranes. These two attributes are typically observed by using various fluorescent dyes. Currently available in vitro assay systems for observing fusion pore opening have several weaknesses, such as cargo bleeding, incomplete removal of unencapsulated dyes, and inadequate information regarding the size of the fusion pore, limiting measurements of the final stage of membrane fusion. In the present study, we used a biotinylated green fluorescent protein and streptavidin conjugated with DyLight 594 (DyStrp) as a Förster resonance energy transfer (FRET) donor and acceptor, respectively. This FRET pair, encapsulated in each v-vesicle containing synaptobrevin and t-vesicle containing a binary acceptor complex of syntaxin 1a and synaptosomal-associated protein 25, revealed the opening of a large fusion pore of more than 5 nm, without unwanted signals from unencapsulated dyes or leakage. This system enabled determination of the stoichiometry of the merging vesicles because the FRET efficiency of the pair depends on the molar ratio between the dyes. Here, we report a robust and informative assay for SNARE-mediated fusion pore opening. - Highlights: • SNARE proteins drive membrane fusion and open a pore for cargo release. • Biotinylated GFP and DyStrp were used as the reporter pair for fusion pore opening. • A procedure for efficient SNARE reconstitution and reporter encapsulation was established. • The FRET pair reported opening of a large fusion pore bigger than 5 nm. • The assay was robust and provided information on the stoichiometry of vesicle fusion.
Design of a large magnetic-bearing turbomolecular pump for NET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhardt, K.H.; Conrad, A.; Dinner, P.J.
1988-09-01
The feasibility of developing large vacuum components for operation in fusion machines has been investigated within the framework of the European Fusion Technology Programme. The requirements and the results of the feasibility study for the large turbomolecular pump (TMP) units are presented. Design parameters for a single-flow 50,000 l/s TMP and for double-flow 15,000 l/s and 50,000 l/s TMPs are compared.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timofeev, Andrey V.; Egorov, Dmitry V.
This paper presents new results concerning the selection of an optimal information fusion formula for an ensemble of Lipschitz classifiers. The goal of information fusion is to create an integral classifier which could provide better generalization ability of the ensemble while achieving a practically acceptable level of effectiveness. The problem of information fusion is very relevant for data processing in multi-channel C-OTDR monitoring systems. In this case we have to effectively classify targeted events which appear in the vicinity of the monitored object. The solution of this problem is based on the usage of an ensemble of Lipschitz classifiers, each of which corresponds to a respective channel. We suggest a new method for information fusion for an ensemble of Lipschitz classifiers, called "Weighing Inversely as Lipschitz Constants" (WILC). Results of practical usage of the WILC method in multichannel C-OTDR monitoring systems are presented.
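The WILC rule itself is simple to sketch: each channel's classifier votes with a weight proportional to the inverse of its Lipschitz constant, so smoother classifiers are trusted more. The event labels, scores, and constants below are hypothetical; the real system derives them from C-OTDR channel data.

```python
def wilc_fuse(channel_scores, lipschitz_constants):
    """Weighted vote over per-channel classifier scores, each channel
    weighted inversely to its Lipschitz constant (the WILC idea).

    channel_scores: dict mapping class label -> list of per-channel
    scores in [0, 1], one score per channel.
    """
    weights = [1.0 / max(L, 1e-12) for L in lipschitz_constants]
    total = sum(weights)
    fused = {
        label: sum(w * s for w, s in zip(weights, scores)) / total
        for label, scores in channel_scores.items()
    }
    return max(fused, key=fused.get), fused

# Channel 2 has the largest Lipschitz constant (least smooth classifier),
# so its dissenting vote carries the least weight.
label, fused = wilc_fuse(
    {"intrusion": [0.9, 0.4, 0.8], "background": [0.1, 0.6, 0.2]},
    lipschitz_constants=[0.5, 5.0, 1.0],
)
```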
2016-01-01
planning exercises and wargaming. Initial Experimentation: Late in the research, the prototype platform and the various fusion methods came together. This ... Chapter Four points to prior research ... Uncertainty-Sensitive Heterogeneous Information Fusion ... multimethod fusing of complex information ... our research is assessing the threat of terrorism posed by individuals or groups under scrutiny. Broadly, the ultimate objectives, which go well
Unified Research on Network-Based Hard/Soft Information Fusion
2016-02-02
types). There are a number of search-tree run parameters which must be set depending on the experimental setting. A pilot study was run to identify ... Final Report: Unified Research on Network-Based Hard/Soft Information Fusion. The views, opinions and/or findings contained in this report ... The University at Buffalo (UB) Center for Multisource
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion applied to remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, more perceptible to human and machine, is created from a time series of low-quality images based on image registration between different video frames.
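A common rule in wavelet-domain fusion of the kind described above keeps, for each detail coefficient, the one with the larger magnitude (preserving the sharpest structure from either frame) and averages the low-frequency approximation coefficients. A minimal sketch on 1-D coefficient lists; the paper operates on 2-D video frames, so this only shows the selection rule itself:

```python
def fuse_detail(c1, c2):
    """Max-absolute selection: keep the stronger detail coefficient."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(c1, c2)]

def fuse_approx(c1, c2):
    """Average the approximation (low-frequency) coefficients."""
    return [(a + b) / 2 for a, b in zip(c1, c2)]

fused_d = fuse_detail([0.1, -5.0, 2.0], [3.0, 1.0, -2.0])
fused_a = fuse_approx([10.0, 20.0], [14.0, 16.0])
```

The fused image is then reconstructed by applying the inverse wavelet transform to the fused coefficient bands.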
The exocytotic fusion pore modeled as a lipidic pore.
Nanavati, C; Markin, V S; Oberhauser, A F; Fernandez, J M
1992-01-01
Freeze-fracture electron micrographs from degranulating cells show that the lumen of the secretory granule is connected to the extracellular compartment via large (20 to 150 nm diameter) aqueous pores. These exocytotic fusion pores appear to be made up of a highly curved bilayer that spans the plasma and granule membranes. Conductance measurements, using the patch-clamp technique, have been used to study the fusion pore from the instant it conducts ions. These measurements reveal the presence of early fusion pores that are much smaller than those observed in electron micrographs. Early fusion pores open abruptly, fluctuate, and then either expand irreversibly or close. The molecular structure of these early fusion pores is unknown. In the simplest extremes, these early fusion pores could be either ion-channel-like protein pores or lipidic pores. Here, we explored the latter possibility, namely that of the early exocytotic fusion pore modeled as a lipid-lined pore whose free energy was composed of curvature elastic energy and work done by tension. Like early exocytotic fusion pores, we found that these lipidic pores could open abruptly, fluctuate, and expand irreversibly. Closure of these lipidic pores could be caused by slight changes in lipid composition. Conductance distributions for stable lipidic pores matched those of exocytotic fusion pores. These findings demonstrate that lipidic pores can exhibit the properties of exocytotic fusion pores, thus providing an alternate framework with which to understand and interpret exocytotic fusion pore data. PMID:1420930
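The energetics of such a lipidic pore are often summarized by a minimal model in which a pore of radius r costs edge (line) energy 2πγr and gains tension work πσr². This is a simplification of the paper's curvature-elastic treatment, and the γ and σ values are assumed orders of magnitude only, but it reproduces the qualitative behavior described above: pores below the barrier radius tend to close, while pores past it expand irreversibly.

```python
import math

GAMMA = 1.0e-11   # edge (line) tension, J/m  -- assumed magnitude
SIGMA = 1.0e-3    # membrane tension, J/m^2   -- assumed magnitude

def pore_energy(r):
    """E(r) = 2*pi*gamma*r - pi*sigma*r**2 for a pore of radius r (m)."""
    return 2 * math.pi * GAMMA * r - math.pi * SIGMA * r**2

# Setting dE/dr = 0 gives the barrier radius and barrier height:
r_star = GAMMA / SIGMA        # here 10 nm
e_star = pore_energy(r_star)  # positive barrier; beyond it E decreases
```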
Revisions to the JDL data fusion model
NASA Astrophysics Data System (ADS)
Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.
1999-03-01
The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration, and operation of multi-sensor/multi-source systems. Data fusion involves combining information - in the broadest sense - to estimate or predict the state of some aspect of the universe. These may be represented in terms of attributive and relational states. If the job is to estimate the state of a person, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison, or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design, and development.
2010-07-01
Multisource Information Fusion (CMIF), along with a team including the Pennsylvania State University (PSU), Iona College (Iona), and Tennessee State ... The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF), along with a team including the Pennsylvania ... of CMIF current research on methods for Test and Evaluation ([7], [8]), involving for example large-factor-space experimental design techniques ([9]
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm with a hierarchical framework consisting of two fusion levels. At the first level, a feature fusion model maps the original features to a higher-dimensional feature space, in which the vehicle logos become more recognizable. At the second level, a weighted voting strategy improves the accuracy and robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, demonstrating that it achieves high recognition accuracy and works robustly.
Remote Sensing Data Fusion to Detect Illicit Crops and Unauthorized Airstrips
NASA Astrophysics Data System (ADS)
Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.
2018-04-01
Remote sensing data fusion plays an increasingly important role in crop planting-area monitoring, especially for the acquisition of crop area information. Multi-temporal data and multi-spectral time series are two major means of improving crop identification accuracy. Remote sensing fusion provides high-quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take one step further and demonstrate the application of remote sensing data fusion to detecting illicit crops through LSMM, GOBIA, and MCE analysis of strategic information. This methodology emerges as a complementary and effective strategy to control and eradicate illicit crops.
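Of the methods named, LSMM (the linear spectral mixture model) is the most directly codable: each pixel spectrum is modeled as a linear combination of endmember signatures, and the abundance fractions are recovered by least squares. The 4-band endmember signatures below are invented for illustration; operational unmixing would use measured spectra and constrained solvers.

```python
import numpy as np

def lsmm_unmix(pixel, endmembers):
    """Linear spectral mixture model: solve pixel ~= endmembers @ fractions
    by least squares, then clip and renormalize to abundance fractions.
    (Plain least squares; practical unmixing enforces nonnegativity and
    sum-to-one constraints directly.)"""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0, None)
    return f / f.sum()

# Hypothetical 4-band signatures: vegetation, an illicit-crop-like
# signature, and bare soil / airstrip surface (columns of E).
E = np.array([
    [0.05, 0.08, 0.45, 0.30],   # vegetation
    [0.06, 0.10, 0.35, 0.40],   # illicit crop (assumed signature)
    [0.20, 0.25, 0.30, 0.35],   # bare soil / airstrip
]).T                             # shape: (bands, endmembers)

true_f = np.array([0.2, 0.7, 0.1])
pixel = E @ true_f               # noiseless synthetic mixed pixel
est = lsmm_unmix(pixel, E)       # recovers the abundances
```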
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, making the result more suitable for browsing and further processing. Two crucial goals for infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the fusion weights by evaluating the information content of each pixel and applies well to visible light and infrared image fusion, with better fusion quality and lower time consumption. Besides, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm produces more effective results in much less time and performs well in both subjective evaluation and objective indicators.
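The paper's pixel information estimate is not specified in the abstract, so the sketch below substitutes a crude stand-in, local variance, as the per-pixel information measure and fuses the two sources with weights proportional to it. It requires NumPy 1.20+ for `sliding_window_view`.

```python
import numpy as np

def local_variance(img, k=3):
    """Per-pixel variance over a k x k neighborhood (reflect-padded)."""
    p = k // 2
    padded = np.pad(img.astype(float), p, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.var(axis=(2, 3))

def info_weighted_fusion(vis, ir, k=3, eps=1e-9):
    """Pixel-level fusion: weight each source by its local variance,
    a crude stand-in for the paper's pixel-information estimate."""
    wv = local_variance(vis, k) + eps
    wi = local_variance(ir, k) + eps
    return (wv * vis + wi * ir) / (wv + wi)

# A flat visible frame and an IR frame with one hot target pixel:
vis = np.full((8, 8), 100.0)
ir = np.zeros((8, 8))
ir[4, 4] = 255.0
fused = info_weighted_fusion(vis, ir)
```

Where neither source carries local structure, the result falls back to an even blend; around the IR hot spot, the IR value dominates.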
Selected Tracking and Fusion Applications for the Defence and Security Domain
2010-05-01
characterized, for example, by sensor ranges from less than a meter to hundreds of kilometers, by time scales ranging from less than a second to a few ... been carried out within the framework of a multinational technology program called MAJIIC (Multi-Sensor Aerospace-Ground Joint ISR Interoperability
A general framework to learn surrogate relevance criterion for atlas based image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Tingting; Ruan, Dan
2016-09-01
Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes, XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP, and NIMROD for the study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
NASA Astrophysics Data System (ADS)
Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd
2009-05-01
Infrastructure management (and its associated processes) is complex to understand and perform, which makes efficient, effective, and informed decisions hard to achieve. The management involves a multi-faceted operation that requires robust data fusion, visualization, and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system, with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and from local and federal government agencies. IRSV is being designed to accommodate the following essential needs: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-Oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and for infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.
Recognition of pornographic web pages by classifying texts and images.
Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve
2007-06-01
With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
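The fusion step, combining the image and discrete-text classifiers by Bayes' theorem, can be sketched as a naive Bayes combination of the two posteriors under a conditional-independence assumption and a flat prior. The probabilities below are illustrative; the paper's exact fusion formula may differ.

```python
def bayes_fuse(p_text, p_image, prior=0.5):
    """Combine two classifiers' pornographic-content probabilities under a
    conditional-independence assumption: convert each posterior back to a
    likelihood ratio, multiply, and renormalize."""
    def lr(p):
        return (p / (1 - p)) / (prior / (1 - prior))
    odds = (prior / (1 - prior)) * lr(p_text) * lr(p_image)
    return odds / (1 + odds)

# Two moderately confident agreeing classifiers reinforce each other,
# while disagreeing ones partially cancel out.
fused_hi = bayes_fuse(0.7, 0.8)
fused_mix = bayes_fuse(0.7, 0.3)
```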
NASA Technical Reports Server (NTRS)
Freeman, Anthony; Dubois, Pascale; Leberl, Franz; Norikane, L.; Way, Jobea
1991-01-01
Viewgraphs on Geographic Information System for fusion and analysis of high-resolution remote sensing and ground truth data are presented. Topics covered include: scientific objectives; schedule; and Geographic Information System.
Brännmark, Johan
2017-07-01
Autonomy and consent have been central values in Western moral and political thought for centuries. One way of understanding the bioethical models that started to develop, especially in the 1970s, is that they were about the fusion of a long-standing professional ethics with the core values underpinning modern political institutions. That there was a need for this kind of fusion is difficult to dispute, especially since the provision of health care has in most developed countries become an ever more important concern of our political institutions, with governments playing a significant role in regulating and facilitating the provision of health care and in many countries even largely organizing it. There is, nevertheless, still room for dispute about how best to achieve this fusion and how to best think about autonomy and consent in a biomedical context. The simplest model we can have is probably about how being a person is largely about having the capacity of autonomous choice and that the main mode through which we exercise autonomy is by providing informed consent. Yet, liberal democracy's core idea that human beings have a high and equal value is also found in other accounts of the person. The human-rights framework provides an alternative model for thinking about personhood and about patient care. The human-rights approach is grounded, not in an account of autonomy (although it has something to say about autonomy), but in an account of the moral and political personhood that people possess merely by being human beings. In this approach, values like dignity and integrity, both highly relevant in a bioethical context, are identified as distinct values rather than being derived from and therefore reduced to respect for autonomous choice. The human-rights approach can supplement the problematic notion of autonomy that has been central to bioethics by placing this notion in a broader, strongly pluralistic framework. © 2017 The Hastings Center.
2008-03-01
amount of arriving data, extract actionable information, and integrate it with prior knowledge. Add to that the pressures of today's fusion center climate and it becomes clear that analysts, police ... fusion centers, including specifics about how these problems manifest at the Illinois State Police (ISP) Statewide Terrorism and Intelligence Center
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG), and electrooculogram (EOG)), as given in the DEAP database. We discuss theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach, and (iii) the dimensional approach, and propose a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal, and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions is also proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74%, and 75.94% for the SVM, MLP, KNN, and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes, and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may contain supplementary information. In comparison with results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
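The multiresolution feature extraction can be sketched with a hand-rolled one-level Haar DWT applied recursively, turning a physiological channel into a vector of relative subband energies. In practice one would use PyWavelets' `wavedec` with a wavelet chosen for the signal; the Haar filter and three levels here are assumptions for illustration.

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT; len(signal) must be even.
    Returns (approximation, detail) coefficient lists."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return a, d

def band_energies(signal, levels=3):
    """Relative energy per subband: one detail band per level plus the
    final approximation band -- a simple multiresolution feature vector."""
    feats, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(sum(x * x for x in d))
    feats.append(sum(x * x for x in a))
    total = sum(feats)
    return [f / total for f in feats]

# A slow ramp concentrates its energy in the low-frequency band.
feats = band_energies([1, 2, 3, 4, 5, 6, 7, 8])
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, so the relative energies always sum to one.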
Inverse scattering approach to improving pattern recognition
NASA Astrophysics Data System (ADS)
Chapline, George; Fu, Chi-Yung
2005-05-01
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
Inverse Scattering Approach to Improving Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapline, G; Fu, C
2005-02-15
The Helmholtz machine provides what may be the best existing model for how the mammalian brain recognizes patterns. Based on the observation that the "wake-sleep" algorithm for training a Helmholtz machine is similar to the problem of finding the potential for a multi-channel Schrodinger equation, we propose that the construction of a Schrodinger potential using inverse scattering methods can serve as a model for how the mammalian brain learns to extract essential information from sensory data. In particular, inverse scattering theory provides a conceptual framework for imagining how one might use EEG and MEG observations of brain-waves together with sensory feedback to improve human learning and pattern recognition. Longer term, implementation of inverse scattering algorithms on a digital or optical computer could be a step towards mimicking the seamless information fusion of the mammalian brain.
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion uses technical means to make the video streams obtained by different image sensors complement one another, yielding video that is rich in information and well suited to the human visual system. Infrared cameras have good penetrating power in harsh environments such as smoke, fog and low light, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational load, occupying substantial memory resources and demanding high clock rates; most implementations are in software (e.g., C++ or C), and hardware-platform implementations are rare. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: registration parameters are obtained in MATLAB, and a gray-level weighted-average fusion method is implemented on a hardware (FPGA) platform. The resulting fused images effectively improve information acquisition and increase the amount of information in the image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage under cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) binary decision fusion rules that derive threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
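The binary decision fusion step (iv) can be illustrated with the standard k-out-of-n counting rule over independent sensor decisions. This sketch is not the paper's specific rule or its threshold bounds; the independence assumption and the voting threshold k are simplifications for illustration:

```python
from itertools import combinations

def fused_rates(p_hit, p_fa, k):
    """System-level hit and false-alarm rates when the fusion center
    declares a detection iff at least k of the n sensors fire.
    p_hit[i], p_fa[i]: per-sensor hit / false-alarm probabilities,
    assumed independent across sensors."""
    def prob_at_least_k(ps, k):
        n = len(ps)
        total = 0.0
        for m in range(k, n + 1):
            # exact probability that precisely the sensors in idx fire
            for idx in combinations(range(n), m):
                pr = 1.0
                for i in range(n):
                    pr *= ps[i] if i in idx else 1.0 - ps[i]
                total += pr
        return total
    return prob_at_least_k(p_hit, k), prob_at_least_k(p_fa, k)
```

For three sensors each with a 0.9 hit rate and 0.1 false-alarm rate, a 2-of-3 vote raises the system hit rate above any single sensor's while lowering the system false-alarm rate, which is the qualitative effect the abstract's threshold bounds formalize.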
NASA Astrophysics Data System (ADS)
Wasserman, Richard Marc
The radiation therapy treatment planning (RTTP) process may be subdivided into three planning stages: gross tumor delineation, clinical target delineation, and modality dependent target definition. The research presented will focus on the first two planning tasks. A gross tumor target delineation methodology is proposed which focuses on the integration of MRI, CT, and PET imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modelling, region growing, fuzzy logic, and data fusion. The resulting fuzzy fusion algorithm can integrate both edge and region information from multiple medical modalities to delineate optimal regions of pathological tissue content. The subclinical boundaries of an infiltrating neoplasm cannot be determined explicitly via traditional imaging methods and are often defined to extend a fixed distance from the gross tumor boundary. In order to improve the clinical target definition process an estimation technique is proposed via which tumor growth may be modelled and subclinical growth predicted. An in vivo, macroscopic primary brain tumor growth model is presented, which may be fit to each patient undergoing treatment, allowing for the prediction of future growth and consequently the ability to estimate subclinical local invasion. Additionally, the patient specific in vivo tumor model will be of significant utility in multiple diagnostic clinical applications.
Mehranfar, Adele; Ghadiri, Nasser; Kouhsar, Morteza; Golshani, Ashkan
2017-09-01
Detecting protein complexes is an important task in analyzing protein interaction networks. Although many algorithms predict protein complexes in different ways, surveys of interaction networks indicate that about 50% of detected interactions are false positives. Consequently, the accuracy of existing methods needs to be improved. In this paper we propose a novel algorithm to detect protein complexes in 'noisy' protein interaction data. First, we integrate several biological data sources to determine the reliability of each interaction and to assign more accurate weights to the interactions. A data fusion component is used for this step, based on an interval type-2 fuzzy voter that provides an efficient combination of the information sources. This fusion component detects errors and diminishes their effect on the detection of protein complexes; thus, in the first step, a reliability score is assigned to every interaction in the network. In the second step, we propose a general protein complex detection algorithm by exploiting and adopting the strong points of other algorithms and existing hypotheses regarding real complexes. Finally, the proposed method is applied to yeast interaction datasets to predict interactions. The results show that our framework achieves better precision and F-measure than existing approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
An automatic fall detection framework using data fusion of Doppler radar and motion sensor network.
Liu, Liang; Popescu, Mihail; Skubic, Marjorie; Rantz, Marilyn
2014-01-01
This paper describes the ongoing work of detecting falls in independent living senior apartments. We have developed a fall detection system with Doppler radar sensor and implemented ceiling radar in real senior apartments. However, the detection accuracy on real world data is affected by false alarms inherent in the real living environment, such as motions from visitors. To solve this issue, this paper proposes an improved framework by fusing the Doppler radar sensor result with a motion sensor network. As a result, performance is significantly improved after the data fusion by discarding the false alarms generated by visitors. The improvement of this new method is tested on one week of continuous data from an actual elderly person who frequently falls while living in her senior home.
Long-range dismount activity classification: LODAC
NASA Astrophysics Data System (ADS)
Garagic, Denis; Peskoe, Jacob; Liu, Fang; Cuevas, Manuel; Freeman, Andrew M.; Rhodes, Bradley J.
2014-06-01
Continuous classification of dismount types (including gender, age, ethnicity) and their activities (such as walking, running) evolving over space and time is challenging. Limited sensor resolution (often exacerbated as a function of platform standoff distance) and clutter from shadows in dense target environments, unfavorable environmental conditions, and the normal properties of real data all contribute to the challenge. The unique and innovative aspect of our approach is a synthesis of multimodal signal processing with incremental non-parametric, hierarchical Bayesian machine learning methods to create a new kind of target classification architecture. This architecture is designed from the ground up to optimally exploit correlations among the multiple sensing modalities (multimodal data fusion) and rapidly and continuously learns (online self-tuning) patterns of distinct classes of dismounts given little a priori information. This increases classification performance in the presence of challenges posed by anti-access/area denial (A2/AD) sensing. To fuse multimodal features, Long-range Dismount Activity Classification (LODAC) develops a novel statistical information theoretic approach for multimodal data fusion that jointly models multimodal data (i.e., a probabilistic model for cross-modal signal generation) and discovers the critical cross-modal correlations by identifying components (features) with maximal mutual information (MI) which is efficiently estimated using non-parametric entropy models. LODAC develops a generic probabilistic pattern learning and classification framework based on a new class of hierarchical Bayesian learning algorithms for efficiently discovering recurring patterns (classes of dismounts) in multiple simultaneous time series (sensor modalities) at multiple levels of feature granularity.
Ontological Issues in Higher Levels of Information Fusion: User Refinement of the Fusion Process
2003-01-01
We explore the higher-level purpose of fusion systems through both modern-day data fusion questions and the philosophical questions of an ontology posed by the Greeks, who separated the World of Visible Things and its Appearances from Belief (pistis) — a distinction that parallels the separation of data fusion from user refinement. The rest of the paper is organized as follows: Section 2 details the Greek perspective.
Jiao, Pengfei; Cai, Fei; Feng, Yiding; Wang, Wenjun
2017-08-21
Link prediction aims at forecasting the latent or unobserved edges in complex networks and has a wide range of applications in practice. Almost all existing methods and models take advantage of only one organizational level of the network, and thus lose important information hidden in the other levels. In this paper, we propose a link prediction framework, called NMF 3 here, which makes the best use of the structure of networks at different levels of organization, based on nonnegative matrix factorization. We first map the observed network into another space by kernel functions, which yields the different-order organizations. Then we combine the adjacency matrix of the network with one of the other organizations, obtaining the objective function of our link prediction framework based on nonnegative matrix factorization. Third, we derive an iterative algorithm to optimize the objective function, which converges to a local optimum, and we propose a fast optimization strategy for large networks. Lastly, we test the proposed framework with two kernel functions on a series of real-world networks under different training-set sizes; the experimental results show the feasibility, effectiveness, and competitiveness of the proposed framework.
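The nonnegative matrix factorization building block at the core of the framework can be sketched with standard multiplicative updates. This is only the basic NMF step, not the paper's kernel-mapped, multi-organization objective; the rank, iteration count, and toy adjacency matrix are illustrative assumptions:

```python
import numpy as np

def nmf(A, r, iters=200, seed=0):
    """Rank-r nonnegative factorization A ~= W @ H via Lee-Seung
    multiplicative updates (Frobenius-norm objective)."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy symmetric adjacency matrix; unobserved pairs are scored by the
# reconstruction W @ H -- higher score suggests a likelier latent edge.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0.]])
W, H = nmf(A, 2)
scores = W @ H
```

In the paper's framework this factorization would be applied to a combination of the adjacency matrix and a kernel-derived organization rather than to the raw adjacency matrix alone.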
A framework of multitemplate ensemble for fingerprint verification
NASA Astrophysics Data System (ADS)
Yin, Yilong; Ning, Yanbin; Ren, Chunxiao; Liu, Li
2012-12-01
How to improve the performance of an automatic fingerprint verification system (AFVS) is a perennial challenge in the biometric verification field. Recently, it has become popular to improve AFVS performance using ensemble learning to fuse related fingerprint information. In this article, we propose a novel fingerprint verification framework based on the multitemplate ensemble method. The framework consists of three stages. In the first, enrollment, stage, we adopt an effective template selection method to select the fingerprints that best represent a finger; a polyhedron is then created from the matching results of the multiple template fingerprints, and a virtual centroid of the polyhedron is given. In the second, verification, stage, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule is used to choose a proper distance from a distance set. Experimental results on the FVC2004 database demonstrate the effectiveness of the new framework for fingerprint verification: with a minutiae-based matching method, the average EER over the four FVC2004 databases drops from 10.85 to 0.88, and with a ridge-based matching method, it decreases from 14.58 to 2.51.
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than its predecessors.
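The two difference operators that feed the fusion step can be sketched directly. The formulas below are the common forms of the log-ratio and mean-ratio operators; the paper's exact definitions (window size, normalization) may differ, and the 3x3 box filter is an illustrative choice:

```python
import numpy as np

def local_mean(x, k=3):
    """Box-filter local mean over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (k * k)

def log_ratio(x1, x2, eps=1e-6):
    """Log-ratio difference image: robust to SAR's multiplicative speckle."""
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio(x1, x2, eps=1e-6):
    """Mean-ratio difference image computed from local means; values in
    [0, 1), large where the two acquisitions disagree."""
    m1, m2 = local_mean(x1), local_mean(x2)
    return 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)
```

The paper then fuses these two difference images in the wavelet domain (averaging low-frequency coefficients, selecting high-frequency ones by minimum local area energy) before clustering, steps omitted from this sketch.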
Information fusion based optimal control for large civil aircraft system.
Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen
2015-03-01
Wind disturbance has a great influence on the landing security of large civil aircraft. Simulation research and engineering experience show that PID control is not good enough to restrain wind disturbance. This paper focuses on anti-wind attitude control for large civil aircraft in the landing phase. In order to improve riding comfort and flight security, an information fusion based optimal control strategy is presented to restrain the wind during landing while maintaining attitudes and airspeed. Data of the Boeing 707 are used to establish a nonlinear model of a large civil aircraft in total variables, from which two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing the hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for discrete-time linear systems with disturbance. Simulation results on the nonlinear aircraft model indicate that the information fusion optimal control is better than traditional PID control, LQR control, and LQR control with integral action in anti-wind disturbance performance in the landing phase. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Enhanced chemical weapon warning via sensor fusion
NASA Astrophysics Data System (ADS)
Flaherty, Michael; Pritchett, Daniel; Cothren, Brian; Schwaiger, James
2011-05-01
Torch Technologies, Inc. is actively involved in chemical sensor networking and data fusion via multi-year efforts with Dugway Proving Ground (DPG) and the Defense Threat Reduction Agency (DTRA). The objective of these efforts is to develop innovative concepts and advanced algorithms that enhance our national Chemical Warfare (CW) test and warning capabilities via the fusion of traditional and non-traditional CW sensor data. Under Phase I, II, and III Small Business Innovation Research (SBIR) contracts with DPG, Torch developed the Advanced Chemical Release Evaluation System (ACRES) software to support non-real-time CW sensor data fusion. Under Phase I and II SBIRs with DTRA, in conjunction with the Edgewood Chemical Biological Center (ECBC), Torch is using the DPG ACRES CW sensor data fuser as a framework from which to develop the Cloud state Estimation in a Networked Sensor Environment (CENSE) data fusion system. Torch is currently developing CENSE to implement and test innovative real-time sensor-network-based data fusion concepts using CW and non-CW ancillary sensor data to improve CW warning and detection in tactical scenarios.
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge about any group structure of the dictionary: by using the characteristics of the dictionary in expressing the signal, it can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information when the images are merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams: the SSIM index is used to evaluate the high-frequency information of the spectrum diagrams, adaptively assigning the weights in the fusion process. After the new spectrum diagram is obtained according to the fusion rule, the final fused image is produced by the inverse 2D-PWVD and inverse GIHS transforms. Experimental results show that the proposed method can obtain high-quality fusion images.
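The SSIM index used to weight the high-frequency fusion has a standard closed form. The sketch below computes a single global SSIM over whole arrays for simplicity; in practice (and presumably in the paper) it is computed per local window, and the constants follow the usual 0.01/0.03 convention, which is an assumption here:

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global structural similarity index between two images.
    L is the dynamic range of the pixel values."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

SSIM equals 1 for identical images and decreases as luminance, contrast, or structure diverge, which is what makes it usable as an adaptive fusion weight.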
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved, a new algorithm of medical image fusion is presented, and the high-frequency and low-frequency coefficients are studied respectively. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body, and evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail information of the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
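The adaptive high-frequency rule — select, per coefficient, the source whose local region is more active — can be sketched as below. The abstract measures regional edge intensity; this sketch uses local energy over a 3x3 window as a stand-in activity measure, which is an assumption, not the paper's exact criterion:

```python
import numpy as np

def local_energy(c, k=3):
    """Sum of squared coefficients over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    cp = np.pad(c, pad, mode='edge')
    out = np.zeros_like(c, dtype=float)
    for i in range(k):
        for j in range(k):
            out += cp[i:i + c.shape[0], j:j + c.shape[1]] ** 2
    return out

def fuse_highfreq(c1, c2, k=3):
    """Per-coefficient selection of the high-frequency sub-band whose
    local region carries more energy (a proxy for edge intensity)."""
    e1, e2 = local_energy(c1, k), local_energy(c2, k)
    return np.where(e1 >= e2, c1, c2)
```

Applied to each high-frequency sub-band of the two source images' wavelet decompositions, this rule keeps the stronger edge detail from either modality at every location.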
Achieving Fairness in Data Fusion Performance Evaluation Development
2006-04-30
The key issue for a test and evaluation (T&E) organization is how to affordably achieve fairness in the application of its performance evaluation (PE) systems. Our PE framework provides a way to address this (as a fusion system design balances probability of mission success with complexity/cost).
2014-09-01
…Hispanic, street, and outlaw motorcycle gang activity in the Commonwealth. The VFC manages the suspicious activity reporting (SAR) initiative. This thesis examined three case studies of fusion center disaster responses through a collaborative-based analytical framework.
An Analytical Framework for Soft and Hard Data Fusion: A Dempster-Shafer Belief Theoretic Approach
2012-08-01
We provide a detailed discussion of uncertain data types, their origins, and three popular uncertainty-processing formalisms. Uncertainty handling formalisms provide techniques for modeling and working with these uncertain data types and the various imperfections of the source: fuzzy set approaches require choosing suitable membership functions corresponding to the fuzzy sets, while the DS belief theory was originally proposed by Dempster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Grote, D. P.; Vay, J. L.
2015-05-29
The Fusion Energy Sciences Advisory Committee’s subcommittee on non-fusion applications (FESAC NFA) is conducting a survey to obtain information from the fusion community about non-fusion work that has resulted from their DOE-funded fusion research. The subcommittee has requested that members of the community describe recent developments connected to the activities of the DOE Office of Fusion Energy Sciences. Two questions in particular were posed by the subcommittee. This document contains the authors’ responses to those questions.
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
Stochastic correlative firing for figure-ground segregation.
Chen, Zhe
2005-03-01
Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.
Infrared and visible image fusion based on total variation and augmented Lagrangian.
Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi
2017-11-01
This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results with high computational efficiency in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging detection technology captures multi-dimensional polarization information in addition to traditional imaging information, thereby improving the probability of target detection and recognition. Studying image fusion of polarization images of targets in turbid media helps obtain high-quality images. Using laser polarization imaging at visible wavelengths, linear polarization intensities were acquired at successive rotation angles of a polaroid, and the polarization parameters of targets were measured in turbid media with concentrations ranging from 5% to 10%. Image fusion processing is then introduced: the polarization images obtained with different fusion methods are compared, several fusion methods with superior performance for turbid media are discussed, and their processing results and data tables are given. Pixel-level, feature-level, and decision-level fusion algorithms are applied to the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization image becomes increasingly blurred and its quality deteriorates, while the contrast of the fused image is clearly improved over any single image; the reasons for the contrast improvement are analyzed.
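The DOLP images that feed the fusion stage are typically estimated from intensities measured at a few polarizer angles. The sketch below uses the standard three-angle (0, 45, 90 degrees) Stokes-parameter estimate; the abstract does not state which angles were used, so this measurement scheme is an assumption:

```python
import math

def stokes_linear(i0, i45, i90):
    """Linear Stokes parameters from intensities measured behind a
    polarizer at 0, 45 and 90 degrees."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal/vertical preference
    s2 = 2 * i45 - s0    # +45/-45 preference
    return s0, s1, s2

def dolp(i0, i45, i90):
    """Degree of linear polarization of a pixel, in [0, 1]."""
    s0, s1, s2 = stokes_linear(i0, i45, i90)
    return math.sqrt(s1 * s1 + s2 * s2) / s0 if s0 else 0.0
```

Fully polarized light at 0 degrees gives DOLP = 1, while unpolarized light (equal intensity at all angles) gives DOLP = 0; applying this per pixel yields the DOLP image that the pixel-, feature-, and decision-level fusion algorithms then combine.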
A Knowledge-Based Approach to Information Fusion for the Support of Military Intelligence
2004-03-01
The presented approach of knowledge-based information fusion focuses on producing the most reliable and appropriate picture of the battlespace. By combining the incomplete and imperfect information of military reports with background knowledge, military intelligence can be supported substantially in an automated system.
NASA Astrophysics Data System (ADS)
Zheng, Pai; Wang, Honghui; Sang, Zhiqian; Zhong, Ray Y.; Liu, Yongkui; Liu, Chao; Mubarok, Khamdi; Yu, Shiqiang; Xu, Xun
2018-06-01
Information and communication technology is undergoing rapid development, and many disruptive technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, have emerged. These technologies are permeating the manufacturing industry and enable the fusion of physical and virtual worlds through cyber-physical systems (CPS), which mark the advent of the fourth stage of industrial production (i.e., Industry 4.0). The widespread application of CPS in manufacturing environments renders manufacturing systems increasingly smart. To advance research on the implementation of Industry 4.0, this study examines smart manufacturing systems for Industry 4.0. First, a conceptual framework of smart manufacturing systems for Industry 4.0 is presented. Second, demonstrative scenarios that pertain to smart design, smart machining, smart control, smart monitoring, and smart scheduling, are presented. Key technologies and their possible applications to Industry 4.0 smart manufacturing systems are reviewed based on these demonstrative scenarios. Finally, challenges and future perspectives are identified and discussed.
Pansharpening on the Narrow Vnir and SWIR Spectral Bands of SENTINEL-2
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.; Karantzalos, K.
2016-06-01
In this paper, results from the evaluation of several state-of-the-art pansharpening techniques are presented for the VNIR and SWIR bands of Sentinel-2. A pansharpening procedure is also proposed that aims to respect the closest spectral similarities between the higher- and lower-resolution bands. The evaluation included 21 different fusion algorithms and three evaluation frameworks based on both standard quantitative image similarity indexes and qualitative evaluation by remote sensing experts. The overall analysis of the evaluation results indicated that the remote sensing experts disagreed with the outcomes and method rankings of the quantitative assessment. The image quality similarity indexes employed, and the quantitative evaluation frameworks from the literature based on both full- and reduced-resolution data, failed to adequately evaluate the spatial information injected into the lower-resolution images. Regarding the SWIR bands, none of the methods delivered significantly better results than standard bicubic interpolation of the original low-resolution bands.
One decade of the Data Fusion Information Group (DFIG) model
NASA Astrophysics Data System (ADS)
Blasch, Erik
2015-05-01
The revision of the Joint Directors of the Laboratories (JDL) Information Fusion model in 2004 discussed information processing, incorporated the analyst, and was coined the Data Fusion Information Group (DFIG) model. Since that time, developments in information technology (e.g., cloud computing, applications, and multimedia) have altered the role of the analyst. Data production has outpaced the analyst; however, the analyst still has the role of data refinement and information reporting. In this paper, we highlight three examples being addressed by the DFIG model. The first example is the role of the analyst in providing semantic queries (through an ontology) so that the vast amount of data available can be indexed, accessed, retrieved, and processed. The second is reporting, which requires the analyst to collect the data into a condensed and meaningful form through information management. The last example is the interpretation of the resolved information from data, which must include contextual information not inherent in the data itself. Through a literature review, the DFIG developments of the last decade demonstrate the usability of the DFIG model in bringing together the user (analyst or operator) and the machine (information fusion or manager) in a systems design.
Physics-based and human-derived information fusion for analysts
NASA Astrophysics Data System (ADS)
Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael
2017-05-01
Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.
FARE-CAFE: a database of functional and regulatory elements of cancer-associated fusion events.
Korla, Praveen Kumar; Cheng, Jack; Huang, Chien-Hung; Tsai, Jeffrey J P; Liu, Yu-Hsuan; Kurubanjerdjit, Nilubon; Hsieh, Wen-Tsong; Chen, Huey-Yi; Ng, Ka-Lok
2015-01-01
Chromosomal translocation (CT) is of enormous clinical interest because this disorder is associated with various major solid tumors and leukemia. A tumor-specific fusion gene event may occur when a translocation joins two separate genes. Currently, various CT databases provide information about fusion genes and their genomic elements. However, no database of the roles of fusion genes, in terms of essential functional and regulatory elements in oncogenesis, is available. FARE-CAFE is a unique combination of CTs, fusion proteins, protein domains, domain-domain interactions, protein-protein interactions, transcription factors and microRNAs, with subsequent experimental information, which cannot be found in any other CT database. Genomic DNA information including, for example, manually collected exact locations of the first and second break points, sequences and karyotypes of fusion genes are included. FARE-CAFE will substantially facilitate the cancer biologist's mission of elucidating the pathogenesis of various types of cancer. This database will ultimately help to develop 'novel' therapeutic approaches. Database URL: http://ppi.bioinfo.asia.edu.tw/FARE-CAFE. © The Author(s) 2015. Published by Oxford University Press.
Design of a multisensor data fusion system for target detection
NASA Astrophysics Data System (ADS)
Thomopoulos, Stelios C.; Okello, Nickens N.; Kadar, Ivan; Lovas, Louis A.
1993-09-01
The objective of this paper is to discuss the issues that are involved in the design of a multisensor fusion system and provide a systematic analysis and synthesis methodology for the design of the fusion system. The system under consideration consists of multifrequency (similar) radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor (similar and dissimilar, and centralized vs. distributed) data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered. Although the study of different approaches may proceed in parallel, the interplay among them is crucial in selecting a fusion configuration for a given application. The natural sequence for addressing the three different issues is to begin from the data modeling, in order to determine the information content of the data. This information will dictate the appropriate fusion approach. This, in turn, will lead to a global fusion architecture. Both distributed and centralized fusion architectures are used to illustrate the design issues along with Monte-Carlo simulation performance comparison of a single sensor versus a multisensor centrally fused system.
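The single-sensor versus centrally fused comparison described above can be illustrated with a toy Monte-Carlo experiment. This is not the paper's radar simulation; it is a generic sketch, under assumed Gaussian statistics, in which three similar sensors observe noise with or without a constant signal, and detection probabilities are compared at a matched false-alarm rate:

```python
import numpy as np

rng = np.random.default_rng(42)
n, m, snr, pfa = 200_000, 3, 1.0, 1e-2   # trials, sensors, SNR, false-alarm rate

noise = rng.standard_normal((n, m))      # target-absent measurements
signal = noise + snr                     # target-present measurements

# single-sensor detector: threshold one channel at the target false-alarm rate
t1 = np.quantile(noise[:, 0], 1 - pfa)
pd_single = np.mean(signal[:, 0] > t1)

# centralized fusion: threshold the sum of all channels at the same Pfa
tm = np.quantile(noise.sum(axis=1), 1 - pfa)
pd_fused = np.mean(signal.sum(axis=1) > tm)

print(pd_single, pd_fused)   # fusion improves detection at equal Pfa
```

The summed statistic is the simplest centralized rule; a distributed architecture would instead fuse per-sensor decisions, trading performance for communication cost.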
Data fusion for delivering advanced traveler information services
DOT National Transportation Integrated Search
2003-05-01
Many transportation professionals have suggested that improved ATIS data fusion techniques and processing will improve the overall quality, timeliness, and usefulness of traveler information. The purpose of this study was four fold. First, conduct a ...
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems remain: 1) how to extract high-level features of aircraft; 2) locating objects within such large images is difficult and time-consuming; and 3) satellite images commonly come at multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that fuses a top-down visual mechanism (a deep CNN model) with a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, a multi-scale training method is used for the deep CNN model to address the multiple-resolution problem. Experimental results demonstrate that the proposed method achieves better detection results than the other methods.
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of big datasets and GPUs. Aiming at better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm achieves better results than some deep architectures. To extract more effective features, this paper first defines the salient areas of the face. Salient areas at the same location in different faces are normalized to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions. This paper proposes a method for determining the salient areas by comparing peak-expression frames with neutral faces, and applies the idea of normalizing the salient areas so as to align the specific areas that express the different expressions; as a result, the salient areas found in different subjects are the same size. In addition, gamma correction is applied to the LBP features in our algorithm framework, which improves recognition rates significantly. With this algorithm framework, our research achieves state-of-the-art performance on the CK+ and JAFFE databases.
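A much-simplified sketch of the feature pipeline described above (salient-area patches, LBP and HOG features, concatenation, PCA) is given below. The patch size, bin counts and component count are illustrative assumptions, and the HOG step is reduced to a single orientation histogram rather than a full block-normalized HOG:

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour LBP code histogram (256 bins) for a grayscale patch."""
    c = patch[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = patch.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = patch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return np.bincount(code.ravel(), minlength=256) / code.size

def hog_histogram(patch, bins=9):
    """Coarse gradient-orientation histogram (a stand-in for full HOG)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def fused_features(patches, n_components=8):
    """Concatenate LBP + HOG per patch, then reduce dimensionality with PCA."""
    feats = np.array([np.concatenate([lbp_histogram(p), hog_histogram(p)])
                      for p in patches])
    feats = feats - feats.mean(axis=0)            # center before PCA
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    return feats @ vt[:n_components].T            # project onto top components

rng = np.random.default_rng(0)
patches = rng.random((20, 24, 24))   # stand-ins for 20 normalized salient areas
print(fused_features(patches).shape)  # -> (20, 8)
```

The reduced feature vectors would then be fed to an ordinary classifier (e.g., an SVM) over the six basic expressions.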
Path to bio-nano-information fusion.
Chen, Jia Ming; Ho, Chih-Ming
2006-12-01
This article will discuss the challenges in a new convergent discipline created by the fusion of biotechnology, nanotechnology, and information technology. To illustrate the research challenges, we will begin with an introduction to the nanometer-scale environment in which biology resides, and point out the many important behaviors of matter at that scale. Then we will describe an ideal model system, the cell, for bio-nano-information fusion. Our efforts in advancing this field at the Institute of Cell Mimetic Space Exploration (CMISE) are introduced here as an example of moving toward this goal.
Viscoelastic modeling of the fusion of multicellular tumor spheroids in growth phase.
Dechristé, Guillaume; Fehrenbach, Jérôme; Griseti, Elena; Lobjois, Valérie; Poignard, Clair
2018-06-08
For several decades, experiments have highlighted the analogy between fusing cell aggregates and liquid droplets. The physical macroscopic models have been derived under incompressibility assumptions. The aim of this paper is to provide a 3D model of growing spheroids, which is more relevant to embryonic cell aggregates or tumor cell spheroids. We extend the previous approach to a compressible 3D framework in order to account for tumor spheroid growth. We exhibit the crucial importance of the effective surface tension and of the inner pressure of the spheroid for describing the fusion precisely. The experimental data were obtained on spheroids of human colon carcinoma cells (HCT116 cell line). After 3 or 6 days of culture, two identical spheroids were transferred into one well and their fusion was monitored by live videomicroscopy, with acquisition every 2 h for 72 h. From these images the neck radius and the diameter of the assembly of the fusing spheroids were extracted. The numerical model was fitted to the experiments; notably, the time evolution of both the neck radius and the spheroid diameter is obtained quantitatively. The interesting feature lies in the fact that such measurements characterize the macroscopic rheological properties of the tumor spheroids. The experimental determination of the kinetics of the neck radius and overall diameter during spheroid fusion characterizes the rheological properties of the spheroids. The consistency of the model is shown by fitting it to two different experiments, emphasizing the importance of both surface tension and cell proliferation. The paper sheds new light on the macroscopic rheological properties of tumor spheroids. It emphasizes the role of the surface tension and the inner pressure in the fusion of growing spheroids. Under geometrical assumptions, the model reduces to a two-parameter differential equation fitted to the experimental measurements.
The 3-D partial differential system makes it possible to study the fusion of spheroids in non-symmetrical or more general frameworks. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng,Y.; Liu, J.; Zheng, Q.
Entry of SARS coronavirus into its target cell requires large-scale structural transitions in the viral spike (S) glycoprotein in order to induce fusion of the virus and cell membranes. Here we describe the identification and crystal structures of four distinct α-helical domains derived from the highly conserved heptad-repeat (HR) regions of the S2 fusion subunit. The four domains are an antiparallel four-stranded coiled coil, a parallel trimeric coiled coil, a four-helix bundle, and a six-helix bundle that is likely the final fusogenic form of the protein. When considered together, the structural and thermodynamic features of the four domains suggest a possible mechanism whereby the HR regions, initially sequestered in the native S glycoprotein spike, are released and refold sequentially to promote membrane fusion. Our results provide a structural framework for understanding the control of membrane fusion and should guide efforts to intervene in the SARS coronavirus entry process.
NASA Astrophysics Data System (ADS)
Schneider, M.; Johnson, T.; Dumont, R.; Eriksson, J.; Eriksson, L.-G.; Giacomelli, L.; Girardo, J.-B.; Hellsten, T.; Khilkevitch, E.; Kiptily, V. G.; Koskela, T.; Mantsinen, M.; Nocente, M.; Salewski, M.; Sharapov, S. E.; Shevelev, A. E.; Contributors, JET
2016-11-01
Recent JET experiments have been dedicated to studies of fusion reactions between deuterium (D) and Helium-3 (3He) ions using neutral beam injection (NBI) in synergy with third-harmonic ion cyclotron radio-frequency heating (ICRH) of the beam. This scenario generates a fast-ion deuterium tail enhancing DD and D3He fusion reactions. Modelling and measuring the fast deuterium tail accurately is essential for quantifying the fusion products. This paper presents the modelling of the D distribution function resulting from the NBI+ICRH heating scheme, reinforced by a comparison with dedicated JET fast-ion diagnostics, showing overall good agreement. Finally, sawtooth activity observed in these experiments has been interpreted using SPOT/RFOF simulations in the framework of Porcelli's theoretical model, in which NBI+ICRH-accelerated ions are found to have a strong stabilizing effect, leading to monster sawteeth.
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge and texture features, and uses a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and so on.
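The region covariance descriptor at the core of the algorithm above can be sketched as follows. The particular per-pixel feature vector [x, y, intensity, |Ix|, |Iy|] is one common choice and an assumption here, not necessarily the exact feature set used by the authors:

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor: covariance of per-pixel feature vectors
    [x, y, intensity, |Ix|, |Iy|] over an image region."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]                     # pixel coordinates
    gy, gx = np.gradient(patch.astype(float))     # intensity derivatives
    F = np.stack([x.ravel(), y.ravel(), patch.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(F)          # 5x5 symmetric positive semi-definite matrix

rng = np.random.default_rng(0)
C = region_covariance(rng.random((16, 16)))
print(C.shape)  # -> (5, 5)
```

Because the descriptor is a small fixed-size matrix regardless of region size, matching candidate regions against the model stays cheap, which is the efficiency property the abstract highlights.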
Cooperative angle-only orbit initialization via fusion of admissible areas
NASA Astrophysics Data System (ADS)
Jia, Bin; Pham, Khanh; Blasch, Erik; Chen, Genshe; Shen, Dan; Wang, Zhonghai
2017-05-01
For the short-arc, angle-only orbit initialization problem, the admissible area is often used. However, the accuracy achievable with a single sensor is limited, and for high-value space objects more accurate results are desired. Fortunately, multiple sensors dedicated to space situational awareness are available. The work in this paper uses information from multiple sensors to cooperatively initialize the orbit based on the fusion of multiple admissible areas. Both centralized and decentralized fusion are discussed. Simulation results verify the expectation that orbit initialization accuracy is improved by using information from multiple sensors.
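The core idea, intersecting admissible areas from several sensors to tighten the orbit solution, can be illustrated with a toy grid-based sketch. The circular regions and the (range, range-rate) grid are illustrative assumptions, not the paper's actual admissible-area construction:

```python
import numpy as np

# shared (range, range-rate) grid on which each sensor marks its admissible area
r, rdot = np.meshgrid(np.linspace(0.0, 1.0, 200), np.linspace(-1.0, 1.0, 200))

def admissible(center_r, center_rdot, radius):
    """Toy circular admissible region for one sensor (illustrative only)."""
    return (r - center_r)**2 + (rdot - center_rdot)**2 < radius**2

a1 = admissible(0.50, 0.00, 0.30)   # sensor 1's admissible area
a2 = admissible(0.55, 0.05, 0.30)   # sensor 2's, offset by its own geometry
fused = a1 & a2                     # centralized fusion: intersect the areas

print(a1.sum(), fused.sum())        # the fused region is strictly tighter
```

In a decentralized scheme each sensor would instead communicate a summary of its own region, but the qualitative effect, a smaller feasible set than any single sensor's, is the same.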
Information Fusion Issues in the UK Environmental Science Community
NASA Astrophysics Data System (ADS)
Giles, J. R.
2010-12-01
The Earth is a complex, interacting system which cannot be neatly divided along discipline boundaries. To gain a holistic understanding of even a component of an Earth system requires researchers to draw information from multiple disciplines and integrate it to develop a broader understanding. But the barriers to achieving this are formidable. Research funders attempting to encourage the integration of information across disciplines need to take into account cultural issues, the impact of the intrusion of projects on existing information systems, ontologies and semantics, scale issues, heterogeneity, and the uncertainties associated with combining information from diverse sources. Culture - There is a cultural dualism in the environmental sciences whereby information sharing is both rewarded and discouraged: researchers who share information gain new opportunities but risk reducing their chances of being first author in a high-impact journal. The culture of the environmental science community has to be managed to ensure that information fusion activities are encouraged. Intrusion - Existing information systems have an inertia of their own because of the intellectual and financial capital invested in them. Information fusion activities must recognise and seek to minimise the potential impact of their projects on existing systems. Low-intrusion information fusion systems such as OGC web services and the OpenMI Standard are to be preferred to wholesale replacement of existing systems. Ontology and Semantics - Linking information across disciplines requires a clear understanding of the concepts deployed in the vocabulary used to describe them. Such work is a critical first step toward routine information fusion. It is essential that national bodies, such as geological survey organisations, document and publish their ontologies, semantics, etc.
Scale - Environmental processes operate at scales ranging from microns to the scale of the Solar System and potentially beyond. The many different scales involved pose serious challenges to information fusion which need to be researched. Heterogeneity - Natural systems are heterogeneous, that is, they consist of multiple components each of which may have considerable internal variation. Modelling Earth systems requires recognition of this inherent complexity. Uncertainty - Understanding the uncertainties within a single information source can be difficult. Understanding the uncertainties across a system of linked models, each drawing on multiple information resources, represents a considerable challenge that must be addressed. The challenges to overcome appear insurmountable to individual research groups; but the potential rewards, in terms of a fuller scientific understanding of Earth systems, are significant. A major international effort must be mounted to tackle these barriers and enable routine information fusion.
NASA Astrophysics Data System (ADS)
Bomark, N.-E.; Moretti, S.; Roszkowski, L.
2016-10-01
A detection at the Large Hadron Collider of a light Higgs pseudoscalar would, if interpreted in a supersymmetric framework, be a smoking-gun signature of non-minimal supersymmetry. In this work, in the framework of the next-to-minimal supersymmetric standard model, we focus on vector boson fusion and Higgs-strahlung production of heavier scalars that subsequently decay into pairs of light pseudoscalars. We demonstrate that although these channels have in general very limited reach, they are viable for the detection of light pseudoscalars in some parts of parameter space and can serve as an important complementary probe to the dominant gluon-fusion production mode. We also demonstrate that at a Higgs factory these channels may reach sensitivities comparable to, or even exceeding, the gluon-fusion channels at the Large Hadron Collider, possibly rendering this our best option for discovering a light pseudoscalar. It is also worth mentioning that for the singlet-dominated scalar this may be the only way to measure its couplings to gauge bosons. Especially promising are channels where the initial scalar is radiated off a W, as these events have relatively high rates and provide substantial background suppression due to the leptons from the W. We identify three benchmark points that well represent the above scenarios. Assuming that the masses of the scalars and pseudoscalars are already measured in the gluon-fusion channel, the event kinematics can be further constrained, hence significantly improving detection prospects. This is especially important in the Higgs-strahlung channels with rather heavy scalars, and results in possible detection at 200 fb-1 for the most favoured parts of the parameter space.
Levéen, Per; Brunnström, Hans; Reuterswärd, Christel; Holm, Karolina; Jönsson, Mats; Annersten, Karin; Rosengren, Frida; Jirström, Karin; Kosieradzki, Jaroslaw; Ek, Lars; Borg, Åke; Planck, Maria; Jönsson, Göran; Staaf, Johan
2017-01-01
Precision medicine requires accurate multi-gene clinical diagnostics. We describe the implementation of an Illumina TruSight Tumor (TST) clinical NGS diagnostic framework and parallel validation of a NanoString RNA-based ALK, RET, and ROS1 gene fusion assay for combined analysis of treatment predictive alterations in non-small cell lung cancer (NSCLC) in a regional healthcare region of Sweden (Scandinavia). The TST panel was clinically validated in 81 tumors (99% hotspot mutation concordance), after which 533 consecutive NSCLCs were collected during one year of routine clinical analysis in the healthcare region (~90% advanced stage patients). The NanoString assay was evaluated in 169 of 533 cases. In the 533-sample cohort 79% had 1-2 variants, 12% >2 variants and 9% no detected variants. Ten gene fusions (five ALK, three RET, two ROS1) were detected in 135 successfully analyzed cases (80% analysis success rate). No ALK or ROS1 FISH fusion positive case was missed by the NanoString assay. Stratification of the 533-sample cohort based on actionable alterations in 11 oncogenes revealed that 66% of adenocarcinomas, 13% of squamous carcinoma (SqCC) and 56% of NSCLC not otherwise specified harbored ≥1 alteration. In adenocarcinoma, 10.6% of patients (50.3% if including KRAS) could potentially be eligible for emerging therapeutics, in addition to the 15.3% of patients eligible for standard EGFR or ALK inhibitors. For squamous carcinoma the corresponding proportions were 4.4% (11.1% with KRAS) vs 2.2%. In conclusion, multiplexed NGS and gene fusion analyses are feasible in NSCLC for clinical diagnostics, identifying notable proportions of patients potentially eligible for emerging molecular therapeutics. PMID:28415793
A multi-data stream assimilation framework for the assessment of volcanic unrest
NASA Astrophysics Data System (ADS)
Gregg, Patricia M.; Pettijohn, J. Cory
2016-01-01
Active volcanoes pose a constant risk to populations living in their vicinity. Significant effort has been spent to increase monitoring and data collection campaigns to mitigate potential volcano disasters. To utilize these datasets to their fullest extent, a new generation of model-data fusion techniques is required that combine multiple, disparate observations of volcanic activity with cutting-edge modeling techniques to provide efficient assessment of volcanic unrest. The purpose of this paper is to develop a data assimilation framework for volcano applications. Specifically, the Ensemble Kalman Filter (EnKF) is adapted to assimilate GPS and InSAR data into viscoelastic, time-forward, finite element models of an evolving magma system to provide model forecasts and error estimations. Since the goal of this investigation is to provide a methodological framework, our efforts are focused on theoretical development and synthetic tests to illustrate the effectiveness of the EnKF and its applicability in physical volcanology. The synthetic tests provide two critical results: (1) a proof of concept for using the EnKF for multi dataset assimilation in investigations of volcanic activity; and (2) the comparison of spatially limited, but temporally dense, GPS data with temporally limited InSAR observations for evaluating magma chamber dynamics during periods of volcanic unrest. Results indicate that the temporally dense information provided by GPS observations results in faster convergence and more accurate model predictions. However, most importantly, the synthetic tests illustrate that the EnKF is able to swiftly respond to data updates by changing the model forecast trajectory to match incoming observations. The synthetic results demonstrate a great potential for utilizing the EnKF model-data fusion method to assess volcanic unrest and provide model forecasts. 
The development of these new techniques provides: (1) a framework for future applications of rapid data assimilation and model development during volcanic crises; (2) a method for hind-casting to investigate previous volcanic eruptions, including potential eruption triggering mechanisms and precursors; and (3) an approach for optimizing survey designs for future data collection campaigns at active volcanic systems.
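The EnKF analysis step at the heart of the framework above can be sketched generically. This is a standard stochastic (perturbed-observation) EnKF update, not the authors' finite-element implementation; the scalar toy state and identity observation operator are assumptions for illustration:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """Stochastic (perturbed-observation) EnKF analysis step: nudge each
    ensemble member toward the data using the ensemble-estimated Kalman gain."""
    n = ensemble.shape[1]                         # ensemble size
    hx = obs_operator(ensemble)                   # predicted observations
    dx = ensemble - ensemble.mean(1, keepdims=True)
    dh = hx - hx.mean(1, keepdims=True)
    p_xh = dx @ dh.T / (n - 1)                    # state-obs cross-covariance
    p_hh = dh @ dh.T / (n - 1) + obs_var * np.eye(len(obs))
    gain = p_xh @ np.linalg.inv(p_hh)             # Kalman gain
    perturbed = obs[:, None] + np.sqrt(obs_var) * rng.standard_normal(hx.shape)
    return ensemble + gain @ (perturbed - hx)

rng = np.random.default_rng(1)
truth = np.array([2.0])                           # unknown "deformation" state
prior = rng.standard_normal((1, 500))             # prior ensemble centered at 0
obs = truth + 0.1 * rng.standard_normal(1)        # one noisy observation
post = enkf_update(prior, obs, lambda x: x, 0.01, rng)
print(prior.mean(), post.mean())   # the posterior mean moves toward the truth
```

In the volcano setting, the ensemble members would be finite-element model states and the observation operator would map them to GPS or InSAR surface displacements; repeated updates as new data arrive produce the trajectory corrections described in the abstract.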
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
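The low-pass/high-pass fusion-rule idea described above can be illustrated with a deliberately simplified two-band sketch: a box blur stands in for the NSCT low-pass band, the low bands are averaged, and the stronger detail coefficient is kept (in place of the phase-congruency and directive-contrast rules used in the paper):

```python
import numpy as np

def box_blur(img, k=3):
    """Box blur used as a stand-in low-pass decomposition."""
    p = np.pad(img, k // 2, mode='edge')
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(a, b):
    """Average the low-pass bands; keep the stronger detail coefficient."""
    la, lb = box_blur(a), box_blur(b)
    ha, hb = a - la, b - lb                       # detail (high-pass) bands
    low = 0.5 * (la + lb)
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high

rng = np.random.default_rng(0)
a, b = rng.random((32, 32)), rng.random((32, 32))
f = fuse(a, b)
print(f.shape)  # -> (32, 32)
```

A full NSCT implementation would replace the box blur with a multi-scale, multi-directional decomposition, but the structure of the fusion rules, one for the low band and one for the detail bands, is the same.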
Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian
2017-01-01
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our works is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images. PMID:29062159
Coupled-channel analyses on 16O + 147,148,150,152,154Sm heavy-ion fusion reactions
NASA Astrophysics Data System (ADS)
Erol, Burcu; Yılmaz, Ahmet Hakan
2018-02-01
Heavy-ion collisions are typically characterized by the presence of many open reaction channels. At energies around the Coulomb barrier, the main processes are elastic scattering, inelastic excitations of low-lying modes, and fusion of the two nuclei. The fusion process is generally described by a one-dimensional barrier penetration model, taking the scattering potential as the sum of the Coulomb and proximity potentials. We have performed coupled-channel (CC) calculations for heavy-ion fusion reactions; the coupled-channel formalism is applied below the barrier energy. In this work, fusion cross sections have been calculated and analyzed in detail for the five systems 16O + 147,148,150,152,154Sm in the framework of the coupled-channel approach (using the codes CCFUS and CCDEF) and the Wong formula. Calculated results are compared with experimental data, with CC calculations using the code CCFULL, and with cross-section data taken from the NRV. CCDEF, CCFULL and the Wong formula describe the fusion reactions of heavy ions very well when the scattering potential is taken as a Woods-Saxon volume potential with Akyüz-Winther parameters. It was observed that the AW potential parameters are able to reproduce the experimentally observed fusion cross sections reasonably well for these systems. There is good agreement between the calculated results and the experimental and NRV [8] results.
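The Wong formula used alongside the CC codes can be evaluated directly. The sketch below implements the standard Wong expression for the fusion cross section; the barrier height Vb, barrier radius Rb and curvature ħω are illustrative placeholders, not fitted parameters for the 16O + Sm systems:

```python
import numpy as np

def wong_cross_section(E, Vb, Rb, hw):
    """Wong's formula for the fusion cross section.

    sigma(E) = (hw * Rb^2) / (2 E) * ln(1 + exp(2 pi (E - Vb) / hw))
    with E, Vb, hw in MeV and Rb in fm; the result is returned in mb.
    """
    E = np.asarray(E, dtype=float)
    sigma_fm2 = (hw * Rb**2) / (2.0 * E) * np.log1p(np.exp(2.0 * np.pi * (E - Vb) / hw))
    return 10.0 * sigma_fm2  # 1 fm^2 = 10 mb
```

Well above the barrier the formula reduces to the classical sharp-cutoff limit pi*Rb^2*(1 - Vb/E), which gives a quick sanity check; well below the barrier the cross section falls off exponentially.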
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
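The LBP feature at the core of this abstract is simple enough to sketch. The following is a minimal 8-neighbor LBP in numpy (basic variant only, without the uniform-pattern or multi-scale extensions used in most face recognition pipelines):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor LBP: threshold each neighbor against the center pixel
    and pack the eight comparison bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbor offsets, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

A histogram of these codes over local blocks then serves as the texture descriptor that is matched (here, per the abstract, with a symmetric Kullback-Leibler distance).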
A genetically encoded fluorescent tRNA is active in live-cell protein synthesis
Masuda, Isao; Igarashi, Takao; Sakaguchi, Reiko; Nitharwal, Ram G.; Takase, Ryuichi; Han, Kyu Young; Leslie, Benjamin J.; Liu, Cuiping; Gamper, Howard; Ha, Taekjip; Sanyal, Suparna
2017-01-01
Abstract Transfer RNAs (tRNAs) perform essential tasks for all living cells. They are major components of the ribosomal machinery for protein synthesis and they also serve in non-ribosomal pathways for regulation and signaling metabolism. We describe the development of a genetically encoded fluorescent tRNA fusion with the potential for imaging in live Escherichia coli cells. This tRNA fusion carries a Spinach aptamer that becomes fluorescent upon binding of a cell-permeable and non-toxic fluorophore. We show that, despite having a structural framework significantly larger than any natural tRNA species, this fusion is a viable probe for monitoring tRNA stability in a cellular quality control mechanism that degrades structurally damaged tRNA. Importantly, this fusion is active in E. coli live-cell protein synthesis allowing peptidyl transfer at a rate sufficient to support cell growth, indicating that it is accommodated by translating ribosomes. Imaging analysis shows that this fusion and ribosomes are both excluded from the nucleoid, indicating that the fusion and ribosomes are in the cytosol together possibly engaged in protein synthesis. This fusion methodology has the potential for developing new tools for live-cell imaging of tRNA with the unique advantage of both stoichiometric labeling and broader application to all cells amenable to genetic engineering. PMID:27956502
Land Surface Verification Toolkit (LVT) - A Generalized Framework for Land Surface Model Evaluation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Peters-Lidard, Christa D.; Santanello, Joseph; Harrison, Ken; Liu, Yuqiong; Shaw, Michael
2011-01-01
Model evaluation and verification are key in improving the usage and applicability of simulation models for real-world applications. In this article, the development and capabilities of a formal system for land surface model evaluation called the Land surface Verification Toolkit (LVT) are described. LVT is designed to provide an integrated environment for systematic land model evaluation and facilitates a range of verification approaches and analysis capabilities. LVT operates across multiple temporal and spatial scales and employs a large suite of in-situ, remotely sensed and other model and reanalysis datasets in their native formats. In addition to the traditional accuracy-based measures, LVT also includes uncertainty and ensemble diagnostics, information theory measures, spatial similarity metrics and scale decomposition techniques that provide novel ways for performing diagnostic model evaluations. Though LVT was originally designed to support the land surface modeling and data assimilation framework known as the Land Information System (LIS), it also supports hydrological data products from other, non-LIS environments. In addition, the analysis of diagnostics from various computational subsystems of LIS, including data assimilation, optimization and uncertainty estimation, is supported within LVT. Together, LIS and LVT provide a robust end-to-end environment for enabling the concepts of model data fusion for hydrological applications. The evolving capabilities of the LVT framework are expected to facilitate rapid model evaluation efforts and aid the definition and refinement of formal evaluation procedures for the land surface modeling community.
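To make the "traditional accuracy-based measures" concrete, here is a minimal sketch (not LVT code) of comparing a model time series against observations with bias, RMSE and anomaly correlation, skipping missing observations:

```python
import numpy as np

def accuracy_metrics(model, obs):
    """Bias, RMSE and anomaly correlation between a model series and observations,
    ignoring time steps where the observation is missing (NaN)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    ok = ~np.isnan(obs)
    m, o = model[ok], obs[ok]
    err = m - o
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    ma, oa = m - m.mean(), o - o.mean()          # anomalies about each mean
    corr = (ma * oa).sum() / np.sqrt((ma ** 2).sum() * (oa ** 2).sum())
    return {"bias": bias, "rmse": rmse, "corr": corr}
```

Real verification toolkits add much more (spatial aggregation, significance testing, information-theoretic measures), but every such suite reduces to paired model/observation samples like these.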
Homeland security application of the Army Soft Target Exploitation and Fusion (STEF) system
NASA Astrophysics Data System (ADS)
Antony, Richard T.; Karakowski, Joseph A.
2010-04-01
A fusion system that accommodates text-based extracted information along with more conventional sensor-derived input has been developed and demonstrated in a terrorist attack scenario as part of the Empire Challenge (EC) 09 Exercise. Although the fusion system was developed to support Army military analysts, the system, based on a set of foundational fusion principles, has direct applicability to department of homeland security (DHS), defense, law enforcement, and other applications. Several novel fusion technologies and applications were demonstrated in EC09. One such technology is location normalization, which accommodates fuzzy semantic expressions such as "behind Library A" or "across the street from the market place", as well as traditional spatial representations. Additionally, the fusion system provides a range of fusion products not supported by traditional fusion algorithms. Many of these additional capabilities have direct applicability to DHS. A formal test of the fusion system was performed during the EC09 exercise. The system demonstrated that it was able to (1) automatically form tracks, (2) help analysts visualize the behavior of individuals over time, (3) link key individuals based on both explicit message-based information and discovered (fusion-derived) implicit relationships, and (4) suggest possible individuals of interest based on their association with High Value Individuals (HVI) and user-defined key locations.
Data fusion in cyber security: first order entity extraction from common cyber data
NASA Astrophysics Data System (ADS)
Giacobe, Nicklaus A.
2012-06-01
The Joint Directors of Labs Data Fusion Process Model (JDL Model) provides a framework for how to handle sensor data to develop higher levels of inference in a complex environment. Beginning from a call to leverage data fusion techniques in intrusion detection, there have been a number of advances in the use of data fusion algorithms in this subdomain of cyber security. While it is tempting to jump directly to situation-level or threat-level refinement (levels 2 and 3) for more exciting inferences, a proper fusion process starts with lower levels of fusion in order to provide a basis for the higher fusion levels. The process begins with first order entity extraction, or the identification of important entities represented in the sensor data stream. Current cyber security operational tools and their associated data are explored for potential exploitation, identifying the first order entities that exist in the data and the properties of these entities that are described by the data. Cyber events that are represented in the data stream are added to the first order entities as their properties. This work explores typical cyber security data and the inferences that can be made at the lower fusion levels (0 and 1) with simple metrics. Depending on the types of events that are expected by the analyst, these relatively simple metrics can provide insight on their own, or could be used in fusion algorithms as a basis for higher levels of inference.
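First order entity extraction of the kind described here reduces, at its simplest, to pulling typed entities out of raw sensor text. A minimal sketch with the standard library `re` module (the alert line and its field layout are invented for illustration, not a real IDS format):

```python
import re

# Hypothetical alert line; the field layout is illustrative only.
LINE = "2012-06-01T14:03:22 ALERT src=192.168.1.77:4412 dst=10.0.0.5:22 sig=ssh-brute"

TS_RX = r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}"
SOCK_RX = r"(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})"   # ip:port pairs

def extract_entities(line):
    """Level 0/1 style extraction: pull first order entities out of raw text."""
    socks = re.findall(SOCK_RX, line)
    return {
        "timestamp": re.findall(TS_RX, line),
        "ip": [ip for ip, _ in socks],
        "port": [port for _, port in socks],
    }
```

Hosts, ports and timestamps extracted this way become the level-1 objects whose properties (the observed cyber events) feed the higher JDL fusion levels.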
A local approach for focussed Bayesian fusion
NASA Astrophysics Data System (ADS)
Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen
2009-04-01
Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion, which is itself independent of fixed modeling assumptions. Using the small-world formalism, we argue why this approach conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information that we perform to build local models. In this paper, we prove the validity of this procedure using information-theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner, where several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. For the practical realization of distributed local Bayesian fusion, software agents are well suited. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.
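The key property behind local Bayesian fusion, that restricting the model to a task-relevant subset of hypotheses and fusing there gives the same answer as fusing globally and then restricting, can be shown in a few lines. This toy sketch (discrete hypotheses, independent sources; not the paper's formalism) illustrates it:

```python
import numpy as np

def bayes_fuse(prior, likelihoods):
    """Fuse independent sources by multiplying their likelihoods into the prior."""
    post = np.asarray(prior, float)
    for lik in likelihoods:
        post = post * np.asarray(lik, float)
    return post / post.sum()

def local_bayes_fuse(prior, likelihoods, region):
    """Restrict the model to a task-relevant subset of hypotheses before fusing."""
    idx = np.asarray(region)
    sub_prior = np.asarray(prior, float)[idx]
    sub_liks = [np.asarray(lik, float)[idx] for lik in likelihoods]
    return bayes_fuse(sub_prior / sub_prior.sum(), sub_liks)
```

Because the posterior is proportional to prior times likelihoods, renormalizing either before or after restriction yields the same local posterior, which is what makes the local models "restricted versions of the original one".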
Open data models for smart health interconnected applications: the example of openEHR.
Demski, Hans; Garde, Sebastian; Hildebrand, Claudia
2016-10-22
Smart Health is known as a concept that enhances networking, intelligent data processing and combining patient data with other parameters. Open data models can play an important role in creating a framework for providing interoperable data services that support the development of innovative Smart Health applications profiting from data fusion and sharing. This article describes a model-driven engineering approach based on standardized clinical information models and explores its application for the development of interoperable electronic health record systems. The following possible model-driven procedures were considered: provision of data schemes for data exchange, automated generation of artefacts for application development and native platforms that directly execute the models. The applicability of the approach in practice was examined using the openEHR framework as an example. A comprehensive infrastructure for model-driven engineering of electronic health records is presented using the example of the openEHR framework. It is shown that data schema definitions to be used in common practice software development processes can be derived from domain models. The capabilities for automatic creation of implementation artefacts (e.g., data entry forms) are demonstrated. Complementary programming libraries and frameworks that foster the use of open data models are introduced. Several compatible health data platforms are listed. They provide standard based interfaces for interconnecting with further applications. Open data models help build a framework for interoperable data services that support the development of innovative Smart Health applications. Related tools for model-driven application development foster semantic interoperability and interconnected innovative applications.
NASA Astrophysics Data System (ADS)
Scheele, C. J.; Huang, Q.
2016-12-01
In the past decade, the rise of social media has led to the development of a vast number of social media services and applications. Disaster management is one such application, leveraging the massive data generated for event detection, response, and recovery. To find disaster-relevant social media data, current approaches utilize natural language processing (NLP) methods based on keywords, or machine learning algorithms relying on text only. However, these approaches cannot be perfectly accurate due to the variability and uncertainty in the language used on social media. To improve on current methods, an enhanced text-mining framework is proposed that incorporates location information from social media and authoritative remote sensing datasets to detect disaster-relevant social media posts, which are determined by assessing the textual content using common text-mining methods and by how each post relates spatiotemporally to the disaster event. To assess the framework, geo-tagged Tweets were collected for three disaster events differing in spatial and temporal scale: a hurricane, a flood, and a tornado. Remote sensing data and products for each event were then collected using RealEarth™. Both Naive Bayes and Logistic Regression classifiers were used to compare accuracy within the enhanced text-mining framework. Finally, the accuracies from the enhanced text-mining framework were compared to the current text-only methods for each of the case-study disaster events. The results from this study address the need for more authoritative data when using social media in disaster management applications.
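The text side of such a pipeline can be sketched with a small hand-rolled multinomial Naive Bayes over unigrams (add-one smoothing); the example tweets and labels below are invented for illustration, and a real system would add the spatiotemporal relevance check described in the abstract:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one smoothing over unigram counts."""
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for doc, lab in zip(docs, labels):
        counts[lab].update(doc.lower().split())
    vocab = {w for cnt in counts.values() for w in cnt}
    return {"counts": counts, "priors": priors, "vocab": vocab, "n": len(labels)}

def classify_nb(model, doc):
    """Return the label with the highest (log) posterior for the document."""
    best, best_lp = None, -math.inf
    V = len(model["vocab"])
    for c, cnt in model["counts"].items():
        lp = math.log(model["priors"][c] / model["n"])
        total = sum(cnt.values())
        for w in doc.lower().split():
            lp += math.log((cnt[w] + 1) / (total + V))
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

In practice one would use a library classifier, but the smoothed count model above is the same computation the abstract's Naive Bayes baseline performs on tweet text.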
Activity recognition using dynamic multiple sensor fusion in body sensor networks.
Gao, Lei; Bourke, Alan K; Nelson, John
2012-01-01
Multiple-sensor fusion is a main research direction for activity recognition. However, such systems face two challenges: the energy consumption of wireless transmission and the classifier design required by a dynamic feature vector. This paper proposes a multi-sensor fusion framework consisting of a sensor selection module and a hierarchical classifier. The sensor selection module adopts convex optimization to select the sensor subset in real time. The hierarchical classifier combines a Decision Tree classifier with a Naïve Bayes classifier. A dataset collected from 8 subjects, who performed 8 scenario activities, was used to evaluate the proposed system. The results show that the proposed system can markedly reduce energy consumption while maintaining recognition accuracy.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-Low-Light CMOS has been widely applied in the field of night vision as a new type of solid-state image sensor. However, if the illumination in the scene changes drastically or the illumination is too strong, an Extreme-Low-Light CMOS sensor cannot clearly present both the high-light and the low-light regions of the scene. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. We choose a fusion strategy based on the regional average gradient for the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features. The remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. In the process of reconstructing the source image from the Laplacian pyramid, we compare the fusion results with four kinds of basal images. The algorithm is tested using Matlab and compared with different fusion strategies. We use information entropy, average gradient and standard deviation as three objective evaluation parameters for further analysis of the fusion result. Experiments in different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. The verification of these properties suggests a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
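The pyramid scheme described above, regional energy for the detail layers and regional average gradient for the top layer, can be sketched end to end in numpy. This is a simplified stand-in (binomial blur, nearest-neighbour expand, per-pixel selection rules), not the paper's Matlab implementation:

```python
import numpy as np

def _blur(img):
    """Separable 5-tap binomial blur with reflected borders."""
    k = np.array([1, 4, 6, 4, 1], float) / 16.0
    smooth = lambda v: np.convolve(np.pad(v, 2, mode="reflect"), k, "valid")
    out = np.apply_along_axis(smooth, 1, np.asarray(img, float))
    return np.apply_along_axis(smooth, 0, out)

def _expand(img, shape):
    """Nearest-neighbour upsample to `shape`, then blur (a simple 'expand')."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return _blur(up)

def build_pyramid(img, levels=3):
    """Laplacian pyramid: detail layers plus one coarse top layer."""
    g = [np.asarray(img, float)]
    for _ in range(levels):
        g.append(_blur(g[-1])[::2, ::2])
    lap = [g[i] - _expand(g[i + 1], g[i].shape) for i in range(levels)]
    return lap + [g[-1]]

def collapse(pyr):
    """Exact reconstruction: add each detail layer back onto the expanded top."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = lap + _expand(cur, lap.shape)
    return cur

def fuse_pyramids(pa, pb):
    """Detail layers: pick the source with larger regional energy.
    Top layer: pick the source with larger regional average gradient."""
    fused = []
    for la, lb in zip(pa[:-1], pb[:-1]):
        ea, eb = _blur(la ** 2), _blur(lb ** 2)          # regional energy
        fused.append(np.where(ea >= eb, la, lb))
    ta, tb = pa[-1], pb[-1]
    ga = _blur(sum(np.abs(d) for d in np.gradient(ta)))  # regional avg gradient
    gb = _blur(sum(np.abs(d) for d in np.gradient(tb)))
    fused.append(np.where(ga >= gb, ta, tb))
    return fused
```

Because each detail layer is defined as the difference against the expanded coarser level, `collapse(build_pyramid(img))` reconstructs the input exactly, so fusing an image with itself returns the image unchanged, a useful sanity check for any fusion rule.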
Influence of incomplete fusion on complete fusion at energies above the Coulomb barrier
NASA Astrophysics Data System (ADS)
Shuaib, Mohd; Sharma, Vijay R.; Yadav, Abhishek; Sharma, Manoj Kumar; Singh, Pushpendra P.; Singh, Devendra P.; Kumar, R.; Singh, R. P.; Muralithar, S.; Singh, B. P.; Prasad, R.
2017-10-01
In the present work, excitation functions of several reaction residues in the system 19F+169Tm, populated via the complete and incomplete fusion processes, have been measured using off-line γ-ray spectroscopy. The analysis of excitation functions has been done within the framework of the statistical model code PACE4. The excitation functions of residues populated via xn and pxn channels are found to be in good agreement with those estimated by the theoretical model code, which confirms the production of these residues solely via the complete fusion process. However, a significant enhancement has been observed in the cross-sections of residues involving α-emitting channels as compared to the theoretical predictions. The observed enhancement in the cross-sections has been attributed to the incomplete fusion processes. In order to have a better insight into the onset and strength of incomplete fusion, the incomplete fusion strength function has been deduced. At present, there is no theoretical model available which can satisfactorily explain the incomplete fusion reaction data at energies ≈4-6 MeV/nucleon. In the present work, the influence of incomplete fusion on complete fusion in the 19F+169Tm system has also been studied. The measured cross-section data may be important for the development of reactor technology as well. It has been found that the incomplete fusion strength function strongly depends on the α-Q value of the projectile, which is found to be in good agreement with the existing literature data. The analysis strongly supports the projectile-dependent mass-asymmetry systematics. In order to study the influence of the Coulomb effect (Z_PZ_T) on incomplete fusion, the deduced strength function for the present work is compared with nearby projectile-target combinations. The incomplete fusion strength function is found to increase linearly with Z_PZ_T, indicating a strong influence of the Coulomb effect in incomplete fusion reactions.
Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang
2016-01-01
Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931
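The information-fusion step above rests on a Kalman filter combining the binocular-camera and ultrasonic range readings. As a minimal stand-in (a fixed-noise scalar filter, not the adaptive variant the paper describes; the noise variances and process noise are assumed values), per-step fusion of two distance sensors looks like this:

```python
import numpy as np

def kalman_fuse(z_cam, z_sonic, r_cam=0.25, r_sonic=0.04, q=0.001):
    """Scalar Kalman filter fusing two distance measurements per time step.

    z_cam / z_sonic: per-step range readings (m); r_cam / r_sonic: assumed
    measurement-noise variances; q: process noise of a random-walk motion model.
    """
    x, p = z_sonic[0], 1.0                  # initial state estimate and variance
    track = []
    for zc, zs in zip(z_cam, z_sonic):
        p += q                               # predict step (random walk)
        for z, r in ((zc, r_cam), (zs, r_sonic)):
            k = p / (p + r)                  # Kalman gain for this sensor
            x += k * (z - x)                 # measurement update
            p *= (1 - k)
        track.append(x)
    return np.array(track)
```

Processing both measurements sequentially in one step is equivalent to a stacked-measurement update, and the fused track is less noisy than either sensor alone, which is the robustness gain the abstract refers to.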
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning systems and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian approaches to evidence-based expert systems. The implementation of data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit high resolution and low noise in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available. PMID:23112602
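Steps (2) and (4) above, EO edge detection followed by blending, can be sketched with a Sobel edge map folded into the IR image. This is a toy stand-in for the paper's pipeline (no PSF inverse filtering or registration; the blend weight alpha is an assumed parameter):

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel operators (zero-padded borders)."""
    img = np.asarray(img, float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1)
    h, w = img.shape
    gx = sum(kx[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def blend_eo_edges(ir, eo, alpha=0.7):
    """Blend normalized EO edge detail into the (already registered) IR image."""
    edges = sobel_edges(eo)
    if edges.max() > 0:
        edges = edges / edges.max()
    return alpha * np.asarray(ir, float) + (1 - alpha) * edges
```

The blend preserves the IR thermal content while injecting the EO scene structure, which is the mechanism the abstract credits for the resolution gain.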
Comparison and evaluation of fusion methods used for GF-2 satellite image in coastal mangrove area
NASA Astrophysics Data System (ADS)
Ling, Chengxing; Ju, Hongbo; Liu, Hua; Zhang, Huaiqing; Sun, Hua
2018-04-01
GF-2 is the highest-spatial-resolution remote sensing satellite in the history of China's satellite development. In this study, three traditional fusion methods, Brovey, Gram-Schmidt and Color Normalized (CN), were compared with a newer fusion method, NNDiffuse, using qualitative assessment and quantitative fusion quality indices, including information entropy, variance, mean gradient, deviation index and spectral correlation coefficient. The analysis results show that the NNDiffuse method is optimal in both the qualitative and the quantitative analysis. It is more effective for subsequent remote sensing information extraction and for forest and wetland resource monitoring applications.
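The quantitative indices named above are all simple image statistics. A minimal numpy sketch (one common definition of each; papers vary in the exact formulas, so treat these as illustrative):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    h, _ = np.histogram(img, bins=bins)
    p = h[h > 0] / h.sum()
    return float(-(p * np.log2(p)).sum())

def mean_gradient(img):
    """Average magnitude of the local intensity gradient."""
    gy, gx = np.gradient(np.asarray(img, float))
    return float(np.hypot(gx, gy).mean())

def spectral_correlation(fused, ref):
    """Correlation coefficient between the fused image and a reference band."""
    f = fused.ravel().astype(float) - fused.mean()
    r = ref.ravel().astype(float) - ref.mean()
    return float((f * r).sum() / np.sqrt((f ** 2).sum() * (r ** 2).sum()))

def deviation_index(fused, ref):
    """Mean relative deviation of the fused image from the reference band."""
    f, r = np.asarray(fused, float), np.asarray(ref, float)
    return float(np.mean(np.abs(f - r) / np.maximum(r, 1e-9)))
```

Higher entropy and mean gradient indicate richer detail, while a deviation index near zero and a spectral correlation near one indicate that the fusion preserved the reference band's radiometry.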
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Interagency and Multinational Information Sharing Architecture and Solutions (IMISAS) Project
2012-02-01
DOT National Transportation Integrated Search
2010-06-01
This guidebook provides an overview of the mission and functions of transportation management centers, emergency operations centers, and fusion centers. The guidebook focuses on the types of information these centers produce and manage and how the sh...
Classification of Sporting Activities Using Smartphone Accelerometers
Mitchell, Edmond; Monaghan, David; O'Connor, Noel E.
2013-01-01
In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today's society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset comprised of soccer and field-hockey activities. An average maximum F-measure of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach. PMID:23604031
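The DWT feature extraction described can be illustrated with a minimal one-dimensional Haar decomposition; this is a simplified sketch, not the authors' pipeline, and the per-level detail energies stand in for the discriminative features fed to a classifier:

```python
import numpy as np

def haar_dwt_features(signal, levels=3):
    """Multi-level 1-D Haar DWT; returns the detail-band energy per level."""
    x = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        if len(x) < 2:
            break
        if len(x) % 2:                      # pad odd-length signals
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        feats.append(float(np.sum(detail ** 2)))   # energy at this scale
        x = approx                          # recurse on the approximation
    return feats
```

In practice one would use a library DWT (e.g. PyWavelets) with the chosen mother wavelet, window length and decomposition depth, exactly the parameters the paper varies.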
Binding and unbinding the auditory and visual streams in the McGurk effect.
Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc
2012-08-01
Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of our study which compared the fusion of EIT with CT using three different image fusion algorithms, namely: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed out of five acrylic contrast targets with varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI-indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI-indexing yielded more consistent and optimal fusion performance than weighted averaging.
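The ROI-indexing fusion described (segmenting regions of interest from the CT image and replacing their pixels with EIT pixels) reduces to a masked copy; a minimal NumPy sketch, with array names chosen for illustration:

```python
import numpy as np

def roi_index_fusion(ct, eit, mask):
    """Replace CT pixels inside the segmented ROI with co-registered EIT pixels.

    ct, eit: equally sized 2-D arrays; mask: boolean ROI from segmentation.
    """
    fused = ct.copy()          # leave the CT background untouched
    fused[mask] = eit[mask]    # conductivity information inside the ROI
    return fused
```

Weighted averaging, by contrast, blends every pixel (e.g. `0.5 * ct + 0.5 * eit`), which explains its different Detectability/SSIM trade-off.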
Adaptive polarization image fusion based on regional energy dynamic weighted average
NASA Astrophysics Data System (ADS)
Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai
2005-11-01
According to the principle of polarization imaging and the relation between Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of Stokes parameters contain rich detailed information about the scene, clutter in the images can be removed efficiently while detailed information is maintained by combining these images. An adaptive polarization image fusion algorithm based on regional energy dynamic weighted averaging is proposed in this paper to combine these images. In an experiment and simulations, most clutter is removed by this algorithm. The fusion method is applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
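The regional-energy dynamic weighting can be sketched as follows; this is a simplified illustration of the idea (box-window energy, per-pixel adaptive weights), not the authors' implementation:

```python
import numpy as np

def regional_energy(img, win=3):
    """Sum of squared intensities over a win x win neighbourhood (zero-padded)."""
    e = img.astype(float) ** 2
    pad = win // 2
    ep = np.pad(e, pad)
    out = np.zeros_like(e)
    for dy in range(win):          # accumulate the shifted energy maps
        for dx in range(win):
            out += ep[dy:dy + e.shape[0], dx:dx + e.shape[1]]
    return out

def fuse(a, b, win=3):
    """Regional-energy dynamic weighted average of two registered images."""
    ea, eb = regional_energy(a, win), regional_energy(b, win)
    w = ea / (ea + eb + 1e-12)     # weight follows local energy dominance
    return w * a + (1 - w) * b
```

Regions where one source carries more local energy (detail) dominate the fused output there, which is how detail is kept while low-energy clutter is suppressed.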
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used normalized coefficient value to motivate the PCNN-processing both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives the fused image with better contrast, more detail information, and suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive-linking strength is used. Different features are used to motivate the PCNN-processing LF and HF sub-bands. The proposed method is extended for fusion of functional image with an anatomical image in improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcome compared to other image fusion methods.
Embedding the results of focussed Bayesian fusion into a global context
NASA Astrophysics Data System (ADS)
Sander, Jennifer; Heizmann, Michael
2014-05-01
Bayesian statistics offers a well-founded and powerful fusion methodology also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations where criminalists pursue clues also only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion. Here, the actual calculation of the posterior distribution gets completely restricted to a suitably chosen local context. By this, the global posterior distribution is not completely determined. Strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle that has been shown to be successfully applicable in metrology and in different other areas. To address the special need for making further decisions subsequently to the actual fusion task, we further analyze criteria for decision making under partial information.
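For discrete hypothesis spaces, the Bayesian fusion underlying this discussion reduces to multiplying the prior by the per-source likelihoods and renormalizing; a minimal sketch for conditionally independent sources (the hypothesis names are illustrative, and focussed fusion would restrict this computation to a local subset of hypotheses):

```python
def bayes_fuse(prior, lik_a, lik_b):
    """Posterior over discrete hypotheses from two conditionally
    independent information sources."""
    post = {h: prior[h] * lik_a[h] * lik_b[h] for h in prior}
    z = sum(post.values())                 # normalizing constant
    return {h: p / z for h, p in post.items()}
```

In the focussed variant, the normalizing constant over the local context is itself only partially informative about the global posterior, which is why embedding strategies such as the Maximum Entropy Principle are needed.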
Li, Feilong; Li, Zhiqiang; Li, Guangxia; Dong, Feihong; Zhang, Wei
2017-01-01
The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential towards spectral efficiency for providing more spectrum access opportunities to secondary user (SU) with sufficient protection to licensed primary user (PU). Hence, recent scientific literature has been focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, those narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we mainly investigate the problem of joint designing sensing time and hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. Compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose novel wideband cooperative spectrum sensing (CSS) framework where each SU reporting duration can be utilized for its following SU sensing. The sensing performance benefits from the novel CSS framework because the equivalent sensing time is extended by making full use of reporting slot. Furthermore, in respect of time-varying channel, the spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gain simultaneously under hard decision fusion rule. Computer simulations show that the optimal sensing settings algorithm of joint optimization of sensing time, hard fusion rule and scheduling strategy achieves significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme performs much higher spectral efficiency than that of general CSS framework. PMID:28117712
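The hard decision fusion rules discussed (OR, AND, majority) are all k-out-of-N counting rules at the fusion centre; a minimal sketch, not the paper's optimization algorithm:

```python
def hard_fusion(decisions, rule="majority"):
    """Combine binary local sensing decisions at the fusion centre.

    OR declares the PU present if any SU reports 1 (k = 1);
    AND requires every SU to report 1 (k = N);
    majority requires more than half (k = ceil(N / 2 + eps))."""
    n, s = len(decisions), sum(decisions)
    k = {"or": 1, "and": n, "majority": n // 2 + 1}[rule]
    return int(s >= k)
```

Which k maximizes SU spectral efficiency depends on the sensing time and channel conditions, which is exactly the joint design problem the paper studies.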
Hamilton, Brian S.; Whittaker, Gary R.; Daniel, Susan
2012-01-01
Hemagglutinin (HA) is the viral protein that facilitates the entry of influenza viruses into host cells. This protein controls two critical aspects of entry: virus binding and membrane fusion. In order for HA to carry out these functions, it must first undergo a priming step, proteolytic cleavage, which renders it fusion competent. Membrane fusion commences from inside the endosome after a drop in lumenal pH and an ensuing conformational change in HA that leads to the hemifusion of the outer membrane leaflets of the virus and endosome, the formation of a stalk between them, followed by pore formation. Thus, the fusion machinery is an excellent target for antiviral compounds, especially those that target the conserved stem region of the protein. However, traditional ensemble fusion assays provide a somewhat limited ability to directly quantify fusion partly due to the inherent averaging of individual fusion events resulting from experimental constraints. Inspired by the gains achieved by single molecule experiments and analysis of stochastic events, recently-developed individual virion imaging techniques and analysis of single fusion events has provided critical information about individual virion behavior, discriminated intermediate fusion steps within a single virion, and allowed the study of the overall population dynamics without the loss of discrete, individual information. In this article, we first start by reviewing the determinants of HA fusogenic activity and the viral entry process, highlight some open questions, and then describe the experimental approaches for assaying fusion that will be useful in developing the most effective therapies in the future. PMID:22852045
Integrating public health and medical intelligence gathering into homeland security fusion centres.
Lenart, Brienne; Albanese, Joseph; Halstead, William; Schlegelmilch, Jeffrey; Paturas, James
Homeland security fusion centres serve to gather, analyse and share threat-related information among all levels of government and law enforcement agencies. In order to function effectively, fusion centres must employ people with the necessary competencies to understand the nature of the threat facing a community, discriminate between important information and irrelevant or merely interesting facts, and apply domain knowledge to interpret the results to obviate or reduce the existing danger. Public health and medical sector personnel routinely gather, analyse and relay health-related information, including health security risks, associated with the detection of suspicious biological or chemical agents within a community to law enforcement agencies. This paper provides a rationale for the integration of public health and medical personnel in fusion centres and describes their role in assisting law enforcement agencies, public health organisations and the medical sector to respond to natural or intentional threats against local communities, states or the nation as a whole.
Dynamics simulations for engineering macromolecular interactions
NASA Astrophysics Data System (ADS)
Robinson-Mosher, Avi; Shinar, Tamar; Silver, Pamela A.; Way, Jeffrey
2013-06-01
The predictable engineering of well-behaved transcriptional circuits is a central goal of synthetic biology. The artificial attachment of promoters to transcription factor genes usually results in noisy or chaotic behaviors, and such systems are unlikely to be useful in practical applications. Natural transcriptional regulation relies extensively on protein-protein interactions to ensure tightly controlled behavior, but such tight control has been elusive in engineered systems. To help engineer protein-protein interactions, we have developed a molecular dynamics simulation framework that simplifies features of proteins moving by constrained Brownian motion, with the goal of performing long simulations. The behavior of a simulated protein system is determined by summation of forces that include a Brownian force, a drag force, excluded volume constraints, relative position constraints, and binding constraints that relate to experimentally determined on-rates and off-rates for chosen protein elements in a system. Proteins are abstracted as spheres. Binding surfaces are defined radially within a protein. Peptide linkers are abstracted as small protein-like spheres with rigid connections. To address whether our framework could generate useful predictions, we simulated the behavior of an engineered fusion protein consisting of two 20,000 Da proteins attached by flexible glycine/serine-type linkers. The two protein elements remained closely associated, as if constrained by a random walk in three dimensions of the peptide linker, as opposed to showing a distribution of distances expected if movement were dominated by Brownian motion of the protein domains only. We also simulated the behavior of fluorescent proteins tethered by a linker of varying length, compared the predicted Förster resonance energy transfer with previous experimental observations, and obtained a good correspondence.
Finally, we simulated the binding behavior of a fusion of two ligands that could simultaneously bind to distinct cell-surface receptors, and explored the landscape of linker lengths and stiffnesses that could enhance receptor binding of one ligand when the other ligand has already bound to its receptor, thus, addressing potential mechanisms for improving targeted signal transduction proteins. These specific results have implications for the design of targeted fusion proteins and artificial transcription factors involving fusion of natural domains. More broadly, the simulation framework described here could be extended to include more detailed system features such as non-spherical protein shapes and electrostatics, without requiring detailed, computationally expensive specifications. This framework should be useful in predicting behavior of engineered protein systems including binding and dissociation reactions.
A Bio-Inspired Herbal Tea Flavour Assessment Technique
Zakaria, Nur Zawatil Isqi; Masnan, Maz Jamilah; Zakaria, Ammar; Shakaff, Ali Yeon Md
2014-01-01
Herbal-based products are becoming a widespread production trend among manufacturers for the domestic and international markets. As production increases to meet market demand, it is crucial for the manufacturer to ensure that their products meet specific criteria and fulfil the intended quality determined by the quality controller. One famous herbal-based product is herbal tea. This paper investigates bio-inspired flavour assessments in a data fusion framework involving an e-nose and e-tongue. The objectives are to attain good classification of different types and brands of herbal tea, classification of different flavour masking effects and, finally, classification of different concentrations of herbal tea. Two data fusion levels were employed in this research: low level data fusion and intermediate level data fusion. Four classification approaches, LDA, SVM, KNN and PNN, were examined in search of the best classifier to achieve the research objectives. In order to evaluate the classifiers' performance, error estimators based on k-fold cross validation and leave-one-out were applied. Classification based on GC-MS TIC data was also included as a comparison to the classification performance using fusion approaches. Generally, KNN outperformed the other classification techniques for the three flavour assessments in both low level and intermediate level data fusion. However, the classification results based on GC-MS TIC data varied. PMID:25010697
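Low level data fusion concatenates the raw sensor responses before classification; together with a plain KNN vote, the idea can be sketched as follows (a simplified illustration with synthetic data, not the study's code):

```python
import numpy as np

def low_level_fuse(e_nose, e_tongue):
    """Low-level data fusion: concatenate raw sensor responses per sample."""
    return np.hstack([e_nose, e_tongue])

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN majority vote on Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = [y_train[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```

Intermediate level fusion would instead extract features from each instrument first and concatenate those, which usually gives shorter, less noisy fused vectors.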
Runtime Simulation for Post-Disaster Data Fusion Visualization
2006-10-01
Center for Multisource Information Fusion (CMIF), The State University of New York at Buffalo, Buffalo, NY 14260 USA.
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
ERIC Educational Resources Information Center
Glasstone, Samuel
This publication is one of a series of information booklets for the general public published by The United States Atomic Energy Commission. Among the topics discussed are: Importance of Fusion Energy; Conditions for Nuclear Fusion; Thermonuclear Reactions in Plasmas; Plasma Confinement by Magnetic Fields; Experiments With Plasmas; High-Temperature…
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of source images are first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means with three different fusion rules at different scales. The three types of fusion rules are for small-scale detail level, large-scale detail level, and base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. After analyzing the experimental comparisons with state-of-the-art fusion methods, the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
Chase: Control of Heterogeneous Autonomous Sensors for Situational Awareness
2016-08-03
The focus remained the discovery and analysis of new foundational methodology for information collection and fusion that exercises rigorous feedback control to simultaneously achieve quantified information and physical objectives. In the general area of novel stochastic systems analysis, the pioneering work on non-Bayesian distributed learning is also noted.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision and matching score level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, feature level fusion suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
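Feature level fusion followed by PCA can be sketched in a few lines; this SVD-based projection is a generic illustration of dimensionality reduction on fused feature vectors, not the proposed algorithm:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project mean-centred feature vectors onto the top principal components.

    X: (n_samples, n_features) fused feature matrix (e.g. palmprint + iris
    features concatenated per subject)."""
    Xc = X - X.mean(axis=0)
    # SVD of the data matrix; rows of Vt are the principal directions,
    # ordered by decreasing explained variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```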
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
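Decision-level fusion with basic probability assignments typically relies on Dempster's rule of combination; a minimal sketch over a two-element frame (the hypothesis labels are illustrative, and this fixed-weight rule omits the paper's dynamic BPA and conflict-correction strategy):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs over the same frame.

    BPAs are dicts mapping frozenset hypotheses to masses summing to 1;
    total conflict (norm == 0) is not handled in this sketch."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb          # mass on empty intersections
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```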
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images, which usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE.
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion. PMID:27869730
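The context-guided fusion can be pictured as a per-pixel convex combination of the two pipelines' foreground confidences, with the mixing weight supplied by scene context. This is a deliberate simplification of the paper's method; the weight value, arrays, and threshold below are illustrative assumptions:

```python
# Context-weighted fusion of per-pixel foreground confidences from an RGB
# and a thermal segmentation pipeline. The paper derives modality quality
# from scene context; here w_rgb is simply a given scalar in [0, 1]
# (e.g. low at night, high at noon).

def fuse_confidences(conf_rgb, conf_thermal, w_rgb):
    w_t = 1.0 - w_rgb
    return [w_rgb * r + w_t * t for r, t in zip(conf_rgb, conf_thermal)]

def segment(fused, threshold=0.5):
    return [c >= threshold for c in fused]

# Night-time example: trust thermal more (w_rgb = 0.2)
rgb     = [0.9, 0.2, 0.4, 0.1]   # noisy at night
thermal = [0.8, 0.7, 0.1, 0.0]
mask = segment(fuse_confidences(rgb, thermal, w_rgb=0.2))
```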
Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization.
Chen, Zhenghua; Zou, Han; Jiang, Hao; Zhu, Qingchang; Soh, Yeng Chai; Xie, Lihua
2015-01-05
Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this is exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system runs on a smartphone, which is resource limited, we formulate the sensor fusion problem from a linear perspective and apply a Kalman filter instead of the particle filter that is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of the individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and the individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m.
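A one-dimensional sketch of this fusion structure, assuming the PDR step length drives the Kalman prediction and a WiFi (or landmark) position fix, when available, drives the update; the noise variances below are invented for illustration:

```python
# 1-D Kalman filter fusing PDR steps (prediction) with occasional WiFi or
# landmark position fixes (update). All noise values are illustrative.

def kalman_1d(x, p, step, q, z=None, r=None):
    # Predict: advance by the PDR step, inflate the uncertainty.
    x, p = x + step, p + q
    # Update: fuse the position fix, if one is available.
    if z is not None:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

x, p = 0.0, 1.0                   # initial position (m) and variance
q, r = 0.04, 4.0                  # PDR process noise, WiFi measurement noise
# Four PDR steps of ~0.7 m; WiFi fixes arrive on steps 2 and 4 only.
for step, z in [(0.7, None), (0.7, 1.6), (0.7, None), (0.7, 2.9)]:
    x, p = kalman_1d(x, p, step, q, z, r)
```

The update step shrinks the variance whenever a fix arrives, which is exactly why drift-prone PDR benefits from the fusion.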
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of both magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
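The structure of the bias-correction step can be sketched as follows: at monitor sites the model bias (model minus observation) is known; it is interpolated to unmonitored grid cells and subtracted. The study uses bias kriging; the inverse-distance weighting below is a simpler stand-in chosen only to show the data flow, and all coordinates and concentrations are invented:

```python
# Bias-corrected data fusion sketch: interpolate the model-minus-observation
# bias from monitor sites to a grid cell, then subtract it from the raw
# model value. IDW replaces the paper's bias kriging for brevity.

def idw_bias(target, sites, biases, power=2.0):
    num = den = 0.0
    for (sx, sy), b in zip(sites, biases):
        d2 = (target[0] - sx) ** 2 + (target[1] - sy) ** 2
        if d2 == 0.0:
            return b                      # exactly at a monitor site
        w = 1.0 / d2 ** (power / 2.0)
        num += w * b
        den += w
    return num / den

sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
model_vals = [42.0, 50.0, 38.0]           # model PM2.5 at the monitors
obs_vals = [35.0, 45.0, 33.0]             # observed PM2.5 at the monitors
biases = [m - o for m, o in zip(model_vals, obs_vals)]
raw = 48.0                                # model PM2.5 at an unmonitored cell
corrected = raw - idw_bias((0.5, 0.5), sites, biases)
```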
NASA Astrophysics Data System (ADS)
Ito, Keishiro
The primacy of a nuclear fusion reactor in a competitive energy market depends largely on the extent to which the reactor reduces the externalities of energy. These reduction effects fall into two classes with quite dissimilar characteristics: one concerns environmental dimensions, and the other is related to energy security. In this study I took the results of the EC's ExternE project studies as a representative example of the former effect. Concerning the latter effect, I clarified the fundamental characteristics of externalities related to energy security and the conceptual framework for their evaluation. In the socio-economic evaluation of research and development investments in nuclear fusion reactors, the public will require the development of integrated evaluation systems to support cost-effect analysis of how well the reduction of externalities has been integrated with the effects of technological innovation, learning, spillover, etc.
Investigation of heavy-ion fusion with deformed surface diffuseness: Actinide and lanthanide targets
NASA Astrophysics Data System (ADS)
Alavi, S. A.; Dehghani, V.
2017-05-01
By using a deformed Broglia-Winther nuclear interaction potential in the framework of the WKB method, the near- and above-barrier heavy-ion fusion cross sections of 16O with some lanthanides and actinides have been calculated. The effect of deformed surface diffuseness on the nuclear interaction potential, the effective interaction potential at distinct angles, the barrier position, the barrier height, the cross section at each angle, and the fusion cross sections of 16O+147Sm,150Nd,154Sm, and 166Er and 16O+232Th,238U,237Np, and 248Cm has been studied. The differences between the results obtained by using deformed surface diffuseness and those obtained by using constant surface diffuseness were noticeable. Good agreement between experimental data and theoretical calculations with deformed surface diffuseness was observed for the 16O+147Sm,154Sm,166Er,238U,237Np, and 248Cm reactions. It has been observed that deformed surface diffuseness plays a significant role in heavy-ion fusion studies.
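For orientation, the standard closed-form benchmark for near- and above-barrier fusion cross sections is Wong's formula, against which WKB results of this kind are typically compared. The barrier parameters below are illustrative, not the paper's deformed-potential fits:

```python
# Wong's approximation for the heavy-ion fusion cross section:
#   sigma(E) = (hbar*omega * Rb^2) / (2E) * ln(1 + exp(2*pi*(E - Vb)/(hbar*omega)))
# E, Vb, hw in MeV; Rb in fm. Barrier parameters are illustrative values
# of the right order for a 16O + heavy-target system, not fitted ones.
import math

def wong_cross_section(E, Vb, hw, Rb):
    """Return the fusion cross section in millibarns (1 fm^2 = 10 mb)."""
    sigma_fm2 = (hw * Rb**2) / (2.0 * E) * math.log1p(
        math.exp(2.0 * math.pi * (E - Vb) / hw))
    return 10.0 * sigma_fm2

Vb, hw, Rb = 61.0, 4.3, 10.8      # barrier height, curvature, radius (assumed)
above = wong_cross_section(70.0, Vb, hw, Rb)   # well above the barrier
below = wong_cross_section(56.0, Vb, hw, Rb)   # sub-barrier tunnelling tail
```

Above the barrier the formula approaches the classical result pi*Rb^2*(1 - Vb/E); below it, the cross section falls off exponentially, which is the regime where deformation effects matter most.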
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all of these initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
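The fusion step can be sketched in a few lines if the spatial window is dropped and the label histogram is taken only across the individual segmentations (the paper also histograms labels over a local neighborhood). The tiny label maps below stand in for K-means runs in different color spaces:

```python
# Fuse several segmentations by clustering the per-pixel histograms of their
# class labels. Grouping identical histograms is a degenerate but valid
# "final clustering" when the feature vectors are this small.
from collections import Counter

def label_histogram(maps, idx, k):
    """Histogram of the labels assigned to pixel idx across all segmentations."""
    counts = Counter(m[idx] for m in maps)
    return tuple(counts.get(label, 0) for label in range(k))

def fuse(maps, k):
    histo = [label_histogram(maps, i, k) for i in range(len(maps[0]))]
    ids = {}
    return [ids.setdefault(h, len(ids)) for h in histo]

# Three 6-pixel segmentations (e.g. RGB, HSV, LAB runs), k = 2 labels
seg_rgb = [0, 0, 0, 1, 1, 1]
seg_hsv = [0, 0, 1, 1, 1, 1]
seg_lab = [0, 0, 0, 1, 1, 1]
fused = fuse([seg_rgb, seg_hsv, seg_lab], k=2)
```

Pixel 2, where the color spaces disagree, ends up in its own group, which is exactly the boundary uncertainty the final clustering is meant to resolve.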
Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage
Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng
2016-01-01
Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy that results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of rc/rs have appeared recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, some other studies apply detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide some results on optimal regular deployment patterns to achieve information coverage and connectivity for a variety of rc/rs values, all based on data fusion by sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between rc/rs and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which can utilize collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern to achieve information coverage and connectivity, and its characteristics are deduced in closed form. Some discussions and simulations are given to show the efficiency of all deployment patterns, including previous patterns and the proposed patterns, to help developers make more informed WSN deployment decisions. PMID:27529246
Fatima, Iram; Fahim, Muhammad; Lee, Young-Koo; Lee, Sungyoung
2013-01-01
In recent years, activity recognition in smart homes has been an active research area due to its applicability in many applications, such as assistive living and healthcare. Besides activity recognition, the information collected from smart homes has great potential for other application domains like lifestyle analysis, security and surveillance, and interaction monitoring. Therefore, discovery of users' common behaviors and prediction of future actions from past behaviors become an important step towards allowing an environment to provide personalized services. In this paper, we develop a unified framework for activity recognition-based behavior analysis and action prediction. For this purpose, we first propose a kernel fusion method for accurate activity recognition and then identify the significant sequential behaviors of inhabitants from the recognized activities of their daily routines. Moreover, behavior patterns are further utilized to predict future actions from past activities. To evaluate the proposed framework, we performed experiments on two real datasets. The results show a remarkable improvement of 13.82% in the average accuracy of recognized activities, along with the extraction of significant behavioral patterns and precise activity predictions with a 6.76% increase in F-measure. All of this collectively helps in understanding users' actions to gain knowledge about their habits and preferences. PMID:23435057
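The kernel fusion idea rests on the fact that a convex combination of valid kernels is itself a valid kernel, so a single classifier can mix evidence from heterogeneous sensor features. A sketch with illustrative base kernels and an illustrative weight:

```python
# Convex combination of two Mercer kernels (RBF and linear) remains a valid
# kernel, usable in any kernel machine. The weight w and gamma are assumptions.
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def fused_kernel(x, y, w=0.7):
    return w * rbf(x, y) + (1.0 - w) * linear(x, y)

k_same = fused_kernel((1.0, 0.0), (1.0, 0.0))   # identical feature vectors
k_far = fused_kernel((1.0, 0.0), (0.0, 3.0))    # dissimilar, orthogonal ones
```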
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
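The core label-fusion step can be sketched as a similarity-weighted vote: each warped atlas proposes a label per voxel, and votes are weighted by how well that atlas locally matches the target, echoing the local similarity ranking described above (the boundary-modulation term is omitted, and all numbers are invented):

```python
# Similarity-weighted voting over candidate labels from multiple warped
# atlases. Labels and similarity scores are illustrative.
from collections import defaultdict

def fuse_labels(atlas_labels, similarities):
    fused = []
    for voxel_votes in zip(*atlas_labels):
        score = defaultdict(float)
        for label, sim in zip(voxel_votes, similarities):
            score[label] += sim
        fused.append(max(score, key=score.get))
    return fused

# Three atlases labelling four voxels (0 = background, 1 = structure)
atlas_labels = [[0, 1, 1, 0],
                [0, 1, 0, 0],
                [1, 1, 1, 0]]
similarities = [0.9, 0.8, 0.4]   # how well each atlas matches the target
fused = fuse_labels(atlas_labels, similarities)
```

At voxel 2 the two well-matching atlases disagree, and the weighted vote sides with the single high-similarity atlas plus the weak one, illustrating why the weighting matters.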
An integrated approach for fusion of environmental and human health data for disease surveillance.
Burkom, Howard S; Ramac-Thomas, Liane; Babin, Steven; Holtry, Rekha; Mnatsakanyan, Zaruhi; Yund, Cynthia
2011-02-28
This paper describes the problem of public health monitoring for waterborne disease outbreaks using disparate evidence from health surveillance data streams and environmental sensors. We present a combined monitoring approach along with examples from a recent project at the Johns Hopkins University Applied Physics Laboratory in collaboration with the U.S. Environmental Protection Agency. The project objective was to build a module for the Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE) to include water quality data with health indicator data for the early detection of waterborne disease outbreaks. The basic question in the fused surveillance application is 'What is the likelihood of the public health threat of interest given recent information from available sources of evidence?' For a scientific perspective, we formulate this question in terms of the estimation of positive predictive value customary in classical epidemiology, and we present a solution framework using Bayesian Networks (BN). An overview of the BN approach presents advantages, disadvantages, and required adaptations needed for a fused surveillance capability that is scalable and robust relative to the practical data environment. In the BN project, we built a top-level health/water-quality fusion BN informed by separate waterborne-disease-related networks for the detection of water contamination and human health effects. Elements of the art of developing networks appropriate to this environment are discussed with examples. Results of applying these networks to a simulated contamination scenario are presented. Copyright © 2011 John Wiley & Sons, Ltd.
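The fused surveillance question, phrased as a positive predictive value, reduces in the simplest case to Bayes' rule over two conditionally independent evidence sources. All probabilities below are invented for illustration; the project's networks are far richer:

```python
# Two-evidence Bayesian fusion: P(outbreak | water sensor, syndromic signal).
# Prior and likelihoods are illustrative assumptions, not project values.

p_outbreak = 0.01
# Likelihood of each evidence source given outbreak / no outbreak
p_water_alarm = {True: 0.8, False: 0.1}    # water-quality sensor trips
p_syndromic = {True: 0.7, False: 0.05}     # GI-syndrome counts elevated

def posterior(water_alarm, syndromic_signal):
    def joint(outbreak):
        prior = p_outbreak if outbreak else 1.0 - p_outbreak
        pw = p_water_alarm[outbreak] if water_alarm else 1.0 - p_water_alarm[outbreak]
        ps = p_syndromic[outbreak] if syndromic_signal else 1.0 - p_syndromic[outbreak]
        return prior * pw * ps
    num = joint(True)
    return num / (num + joint(False))

both = posterior(True, True)          # both sources alarm together
water_only = posterior(True, False)   # sensor alarm without health signal
```

Even with these toy numbers, the fusion behaves as intended: a lone sensor alarm barely moves the posterior, while corroborating health data raises it sharply.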
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
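The "optimal fusion of estimators" step has a compact classical form: given several unbiased estimates of the same failure probability with known variances, the minimum-variance unbiased combination weights each estimate by its inverse variance. The numbers below are illustrative, not results from the paper:

```python
# Inverse-variance (minimum-variance unbiased) fusion of competing unbiased
# estimators of the same quantity. Estimates and variances are illustrative.

def fuse_estimators(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Competing estimates from three hypothetical biasing distributions
estimates = [8.0e-4, 1.2e-3, 9.0e-4]
variances = [4.0e-8, 9.0e-8, 1.0e-7]
fused, fused_var = fuse_estimators(estimates, variances)
```

The fused variance 1/sum(1/v_i) is always below the smallest individual variance, which is the sense in which the fusion is optimal.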
Distributed Information Fusion through Advanced Multi-Agent Control
2016-10-17
AFRL-AFOSR-JP-TR-2016-0080. Final report by Adrian Bishop, National ICT Australia Limited, Level 5, 13 Garden St, Eveleigh NSW 2015, Australia.
Panigrahi, Priyabrata; Jere, Abhay; Anamika, Krishanpal
2018-01-01
Gene fusion is a chromosomal rearrangement event which plays a significant role in cancer due to the oncogenic potential of the chimeric protein generated through fusions. At present many databases are available in public domain which provides detailed information about known gene fusion events and their functional role. Existing gene fusion detection tools, based on analysis of transcriptomics data usually report a large number of fusion genes as potential candidates, which could be either known or novel or false positives. Manual annotation of these putative genes is indeed time-consuming. We have developed a web platform FusionHub, which acts as integrated search engine interfacing various fusion gene databases and simplifies large scale annotation of fusion genes in a seamless way. In addition, FusionHub provides three ways of visualizing fusion events: circular view, domain architecture view and network view. Design of potential siRNA molecules through ensemble method is another utility integrated in FusionHub that could aid in siRNA-based targeted therapy. FusionHub is freely available at https://fusionhub.persistent.co.in.
NASA Astrophysics Data System (ADS)
Ai, Yan-Ting; Guan, Jiao-Yue; Fei, Cheng-Wei; Tian, Jing; Zhang, Feng-Ling
2017-05-01
To monitor the operating status of rolling bearings with casings in real time, efficiently and accurately, a fusion method based on an n-dimensional characteristic parameters distance (n-DCPD) was proposed for rolling bearing fault diagnosis with two types of signals: vibration signals and acoustic emission signals. The n-DCPD was built on four information entropies (singular spectrum entropy in the time domain, power spectrum entropy in the frequency domain, and wavelet space characteristic spectrum entropy and wavelet energy spectrum entropy in the time-frequency domain), and the basic idea of the fusion information entropy fault diagnosis method with n-DCPD was presented. Using a rotor simulation test rig, the vibration and acoustic emission signals of six rolling bearing conditions (ball fault, inner race fault, outer race fault, combined inner-race and ball faults, combined inner- and outer-race faults, and normal) were collected under different operating conditions, with rotation speeds from 800 rpm to 2000 rpm. With the proposed fusion information entropy method based on n-DCPD, the diagnosis of rolling bearing faults was completed. The fault diagnosis results show that the fusion entropy method achieves high precision in the recognition of rolling bearing faults. This study provides a novel and useful methodology for the fault diagnosis of aeroengine rolling bearings.
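A sketch of the n-DCPD decision rule under the simplifying assumption that each fault mode is summarized by a template vector of the four information entropies and that a new measurement is assigned to the nearest template; every entropy value below is invented for illustration:

```python
# Nearest-template classification in the 4-D entropy space
# (singular spectrum, power spectrum, wavelet space, wavelet energy).
# Template values are illustrative, not measured ones.
import math

templates = {
    "normal":     (0.81, 0.42, 0.55, 0.60),
    "inner race": (0.65, 0.70, 0.48, 0.52),
    "outer race": (0.58, 0.66, 0.61, 0.44),
}

def n_dcpd(u, v):
    """Euclidean distance between two entropy vectors."""
    return math.dist(u, v)

def diagnose(measurement):
    return min(templates, key=lambda fault: n_dcpd(measurement, templates[fault]))

fault = diagnose((0.66, 0.69, 0.50, 0.51))   # close to the inner-race template
```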
Perdikaris, Paris; Karniadakis, George Em
2016-05-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
Modeling Cyber Situational Awareness Through Data Fusion
2013-03-01
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
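The core ingredient of the metric is the mutual information between two discretized images, computed from their joint histogram. The paper evaluates it on Weber components within dominance groups and then weights the group-wise values; the plain MI computation looks like this (toy six-pixel images):

```python
# Mutual information between two discretized images via their joint
# histogram. The six-pixel "images" are illustrative.
import math
from collections import Counter

def mutual_information(img_a, img_b):
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa, pb = Counter(img_a), Counter(img_b)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab / (p_a * p_b) == c * n / (pa[a] * pb[b])
        mi += p_ab * math.log2(c * n / (pa[a] * pb[b]))
    return mi

source = [0, 0, 1, 1, 2, 2]
fused_good = [0, 0, 1, 1, 2, 2]   # preserves the source structure
fused_poor = [0, 1, 2, 0, 1, 2]   # structure lost
mi_good = mutual_information(source, fused_good)
mi_poor = mutual_information(source, fused_poor)
```

A fused image that preserves the source's structure scores the full entropy of the source; a scrambled one scores strictly less, which is what makes MI usable as a fusion quality metric.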
Designing Image Operators for MRI-PET Image Fusion of the Brain
NASA Astrophysics Data System (ADS)
Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.
2006-09-01
Our goal is to obtain images combining in a useful and precise way the information from 3D volumes of medical imaging sets. We address two modalities combining anatomy (Magnetic Resonance Imaging or MRI) and functional information (Positron Emission Tomography or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities, and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion taking advantage mainly from the HSL (Hue, Saturation and Luminosity) color space, in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
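The HSL idea can be shown in miniature with Python's standard colorsys module (which uses HLS argument order): anatomy drives luminosity while function drives hue and saturation, so structure stays visible and metabolic activity reads as color. The mapping below is one plausible choice, not the paper's exact operators:

```python
# Per-pixel MRI-PET fusion in HSL/HLS space: MRI intensity -> lightness,
# PET activity -> hue (blue = cold, red = hot) and saturation.
# Both inputs are assumed normalized to [0, 1].
import colorsys

def fuse_pixel(mri_intensity, pet_activity):
    hue = (1.0 - pet_activity) * 2.0 / 3.0   # 2/3 (blue) down to 0 (red)
    lightness = mri_intensity                # anatomy sets the brightness
    saturation = pet_activity                # gray where PET is silent
    return colorsys.hls_to_rgb(hue, lightness, saturation)

cold_bone = fuse_pixel(0.9, 0.0)    # bright anatomy, no uptake -> light gray
hot_lesion = fuse_pixel(0.5, 1.0)   # strong uptake -> saturated red
```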
Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians
NASA Astrophysics Data System (ADS)
Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von
2008-03-01
Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. Thereby, IDA meets with typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within the Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of different diagnostics results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications range from outlier detection, background discrimination, model assessment and design of diagnostics. In order to cope with next step fusion device requirements, appropriate techniques are explored for fast analysis applications.
A Cyber-ITS Framework for Massive Traffic Data Analysis Using Cyber Infrastructure
Xia, Yingjie; Hu, Jia; Fontaine, Michael D.
2013-01-01
Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means to integrate heterogeneous traffic data from different kinds of sensors and apply it for ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to the problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), by nature parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing. PMID:23766690
Meng, Xing; Jiang, Rongtao; Lin, Dongdong; Bustillo, Juan; Jones, Thomas; Chen, Jiayu; Yu, Qingbao; Du, Yuhui; Zhang, Yu; Jiang, Tianzi; Sui, Jing; Calhoun, Vince D.
2016-01-01
Neuroimaging techniques have greatly enhanced the understanding of neurodiversity (human brain variation across individuals) in both health and disease. The ultimate goal of using brain imaging biomarkers is to perform individualized predictions. Here we proposed a generalized framework that can predict explicit values of the targeted measures by taking advantage of joint information from multiple modalities. This framework also enables whole brain voxel-wise searching by combining multivariate techniques such as ReliefF, clustering, correlation-based feature selection and multiple regression models, which is more flexible and can achieve better prediction performance than alternative atlas-based methods. For 50 healthy controls and 47 schizophrenia patients, three kinds of features derived from resting-state fMRI (fALFF), sMRI (gray matter) and DTI (fractional anisotropy) were extracted and fed into a regression model, achieving high prediction for both cognitive scores (MCCB composite r = 0.7033, MCCB social cognition r = 0.7084) and symptomatic scores (positive and negative syndrome scale [PANSS] positive r = 0.7785, PANSS negative r = 0.7804). Moreover, the brain areas likely responsible for cognitive deficits of schizophrenia, including middle temporal gyrus, dorsolateral prefrontal cortex, striatum, cuneus and cerebellum, were located with different weights, as well as regions predicting PANSS symptoms, including thalamus, striatum and inferior parietal lobule, pinpointing the potential neuromarkers. Finally, compared to a single modality, multimodal combination achieves higher prediction accuracy and enables individualized prediction on multiple clinical measures. There is more work to be done, but the current results highlight the potential utility of multimodal brain imaging biomarkers to eventually inform clinical decision-making. PMID:27177764
Usability evaluation of cloud-based mapping tools for the display of very large datasets
NASA Astrophysics Data System (ADS)
Stotz, Nicole Marie
The elasticity and on-demand nature of cloud services have made it easier to create web maps. Users only need a web browser and an Internet connection to use cloud-based web maps, eliminating the need for specialized software. To encourage a wide variety of users, a map must be well designed; usability is a very important concept in web map design. Fusion Tables, a new product from Google, is one example of the newer cloud-based distributed GIS services. It allows easy spatial data manipulation and visualization within the Google Maps framework. ESRI has also introduced a cloud-based version of its software, called ArcGIS Online, built on Amazon's EC2 cloud. Using a user-centered design framework, two prototype maps were created with data from the San Diego East County Economic Development Council: one built on Fusion Tables, the other on ESRI's ArcGIS Online. A usability analysis was conducted to compare the two map prototypes in terms of design and functionality. Load tests were also run, and performance metrics were gathered on both map prototypes. The usability analysis was completed by 25 geography students and consisted of timed tasks and questions on map design and functionality. Survey participants completed the timed tasks for the Fusion Tables map prototype more quickly than those of the ArcGIS Online map prototype. While responses were generally positive towards the design and functionality of both prototypes, overall the Fusion Tables map prototype was preferred. For the load tests, the data set was broken into 22 groups for a total of 44 tests. While the Fusion Tables map prototype performed more efficiently than the ArcGIS Online prototype, the differences are almost unnoticeable. A SWOT analysis was conducted for each prototype. The results from this research point to the Fusion Tables map prototype.
A redesign of this prototype would incorporate design suggestions from the usability survey, while some functionality would need to be dropped. This is a free product and would therefore be the best option if cost is an issue, but this map may not be supported in the future.
A Decision Support Framework for Genomically Informed Investigational Cancer Therapy
Johnson, Amber; Holla, Vijaykumar; Bailey, Ann Marie; Brusco, Lauren; Chen, Ken; Routbort, Mark; Patel, Keyur P.; Zeng, Jia; Kopetz, Scott; Davies, Michael A.; Piha-Paul, Sarina A.; Hong, David S.; Eterovic, Agda Karina; Tsimberidou, Apostolia M.; Broaddus, Russell; Bernstam, Elmer V.; Shaw, Kenna R.; Mendelsohn, John; Mills, Gordon B.
2015-01-01
Rapidly improving understanding of molecular oncology, emerging novel therapeutics, and increasingly available and affordable next-generation sequencing have created an opportunity for delivering genomically informed personalized cancer therapy. However, to implement genomically informed therapy requires that a clinician interpret the patient’s molecular profile, including molecular characterization of the tumor and the patient’s germline DNA. In this Commentary, we review existing data and tools for precision oncology and present a framework for reviewing the available biomedical literature on therapeutic implications of genomic alterations. Genomic alterations, including mutations, insertions/deletions, fusions, and copy number changes, need to be curated in terms of the likelihood that they alter the function of a “cancer gene” at the level of a specific variant in order to discriminate so-called “drivers” from “passengers.” Alterations that are targetable either directly or indirectly with approved or investigational therapies are potentially “actionable.” At this time, evidence linking predictive biomarkers to therapies is strong for only a few genomic markers in the context of specific cancer types. For these genomic alterations in other diseases and for other genomic alterations, the clinical data are either absent or insufficient to support routine clinical implementation of biomarker-based therapy. However, there is great interest in optimally matching patients to early-phase clinical trials. Thus, we need accessible, comprehensive, and frequently updated knowledge bases that describe genomic changes and their clinical implications, as well as continued education of clinicians and patients. PMID:25863335
NASA Astrophysics Data System (ADS)
Falco, N.; Wainwright, H. M.; Dafflon, B.; Leger, E.; Peterson, J.; Steltzer, H.; Wilmer, C.; Williams, K. H.; Hubbard, S. S.
2017-12-01
Mountainous watershed systems are characterized by extreme heterogeneity in hydrological and pedological properties that influence biotic activities, plant communities and their dynamics. To gain a predictive understanding of how ecosystems and watershed systems evolve under climate change, it is critical to capture such heterogeneity and to quantify the effect of key environmental variables such as topography and soil properties. In this study, we exploit advanced geophysical and remote sensing techniques, coupled with machine learning, to better characterize and quantify the interactions between plant community distribution and subsurface properties. First, we developed a remote sensing data fusion framework based on the random forest (RF) classification algorithm to estimate the spatial distribution of plant communities. The framework allows the integration of both plant spectral and structural information, derived from multispectral satellite images and airborne LiDAR data. We then use the RF method to evaluate the estimated plant community map, exploiting subsurface properties (such as bedrock depth and soil moisture) and geomorphological parameters (such as slope and curvature) as predictors. Datasets include high-resolution geophysical data (electrical resistivity tomography) and LiDAR digital elevation maps. We demonstrate our approach on a mountain hillslope and meadow within the East River watershed in Colorado, which is considered a representative headwater catchment in the Upper Colorado Basin. The obtained results show the existence of co-evolution between above- and below-ground processes; in particular, dominant shrub communities occur in wet and flat areas. We show that successful integration of remote sensing data with geophysical measurements allows identifying and quantifying the key environmental controls on plant community distribution, and provides insights into potential changes under future climate conditions.
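The fusion framework's core move, stacking spectral predictors with LiDAR-derived structural predictors and classifying with a bootstrap-aggregated ensemble, can be sketched as below. The ensemble of decision stumps is a toy stand-in for the paper's random forest, and all data, labels, and thresholds are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel predictors: two spectral bands (multispectral)
# and one structural band (LiDAR-derived height), fused by stacking.
n = 400
X = np.column_stack([rng.normal(size=(n, 2)), rng.normal(size=n)])
# Synthetic labels: "shrub" wherever the structural predictor is low,
# a toy proxy for wet/flat terrain.
y = (X[:, 2] < 0.0).astype(int)

def fit_stump(X, y, idx):
    """Best single-feature threshold split on a bootstrap sample idx."""
    best, best_err = None, 2.0
    for f in range(X.shape[1]):
        for t in np.percentile(X[idx, f], [25, 50, 75]):
            pred = (X[idx, f] < t).astype(int)
            err = np.mean(pred != y[idx])
            flip = err > 0.5                  # invert the stump if needed
            err = min(err, 1.0 - err)
            if err < best_err:
                best, best_err = (f, float(t), flip), err
    return best

def predict_stump(stump, X):
    f, t, flip = stump
    pred = (X[:, f] < t).astype(int)
    return 1 - pred if flip else pred

# Bootstrap-aggregated stumps: majority voting over resampled trainings,
# the same bagging idea that underlies a random forest.
stumps = [fit_stump(X, y, rng.integers(0, n, n)) for _ in range(25)]
votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
acc = float(np.mean((votes > 0.5).astype(int) == y))
print(round(acc, 3))
```

Because the synthetic labels depend only on the structural band, the ensemble consistently selects it, which mirrors how the paper uses predictor importance to identify environmental controls.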
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new HRS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, together with spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into different groups, then extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level and decision-level fusion, are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. To promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
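The back-propagation network the abstract refers to can be illustrated with a minimal sketch. The one-hidden-layer architecture, the synthetic two-class "spectra", and all parameter choices below are illustrative assumptions, not the authors' configuration or data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-class "pixel spectra": class 1 has a higher mean
# reflectance in every band. Sizes and values are illustrative only.
n = 200
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 5)),
               rng.normal(1.5, 1.0, size=(n, 5))])
y = np.repeat([0, 1], n).reshape(-1, 1).astype(float)

# One hidden layer of sigmoid units trained by plain gradient descent:
# the classic back-propagation network (cross-entropy output here).
W1 = rng.normal(scale=0.5, size=(5, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(1000):
    h = sig(X @ W1 + b1)                 # forward pass
    out = sig(h @ W2 + b2)
    d_out = out - y                      # output delta (cross-entropy)
    d_h = (d_out @ W2.T) * h * (1 - h)   # back-propagated hidden delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

acc = float(np.mean((out > 0.5) == (y > 0.5)))
print(round(acc, 3))
```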
Adolescents' perceptions of music therapy following spinal fusion surgery.
Kleiber, Charmaine; Adamek, Mary S
2013-02-01
To explore adolescents' memories of music therapy after spinal fusion surgery and their recommendations for future patients. Spinal fusion for adolescent idiopathic scoliosis is one of the most painful surgeries performed. Music therapy has been shown to decrease postoperative pain in children after minor surgery. In preparation for developing a preoperative information program, we interviewed adolescents who had undergone spinal fusion and received postoperative music therapy to find out what they remembered and what they recommended for future patients. Eight adolescents who had spinal fusion for adolescent idiopathic scoliosis were interviewed about their experiences. For this qualitative study, the investigators independently used thematic analysis techniques to formulate interpretive themes; together they then discussed their ideas and assigned overall meanings to the information. The eight participants were 13-17 years of age and had had surgery 2-24 months previously. The overarching themes identified from the interviews were relaxation and pain perception, choice and control, therapist interaction, and preoperative information. Participants stated that music therapy helped with mental relaxation and distraction from pain. It was important to them to be able to choose the type of music for the therapy and to use self-control to focus on the positive. Their recommendation was that future patients be provided with information preoperatively about music therapy and pain management. Participants recommended a combination of auditory and visual information, especially the experiences of previous patients who had spinal fusion and music therapy. Music provided live at the bedside by a music therapist was remembered vividly and positively by most of the participants. The presence of a music therapist providing patient-selected music at the bedside is important. Methods to introduce adolescents to music therapy and to using music for relaxation should be developed and tested.
© 2012 Blackwell Publishing Ltd.
2012-10-01
…education of a new generation of data fusion analysts
Graham, Jacob L.; Hall, David L.
College of Information Sciences & Technology, Pennsylvania State University, University Park, PA, U.S.A. (jgraham@ist.psu.edu)
Dynamic Creation of Social Networks for Syndromic Surveillance Using Information Fusion
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Sudit, Moises; Stotz, Adam
To enhance the effectiveness of health care, many medical institutions have started transitioning to electronic health and medical records and sharing these records between institutions. The large amount of complex and diverse data makes it difficult to identify and track relationships and trends, such as disease outbreaks, from the data points. INFERD (INformation Fusion Engine for Real-time Decision-making) is an information fusion tool that dynamically correlates and tracks event progressions. This paper presents a methodology that utilizes the efficient and flexible structure of INFERD to create social networks representing progressions of disease outbreaks. Individual symptoms are treated as features, allowing multiple hypotheses to be tracked and analyzed for effective and comprehensive syndromic surveillance.
Understanding the Role of Context in the Interpretation of Complex Battlespace Intelligence
2006-01-01
…"(Level 2 Fusion)" that there remains a significant need for higher levels of information fusion, such as those required for generic situation awareness … 1) information in a set of reports, 2) general background knowledge (e.g., doctrine, techniques, practices) plus 4) known situation-specific information (e.g. …)
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Then the research achievements in polarization information parsing are presented. For polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. For multiple polarization parameter calculation, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with markedly improved inversion precision. For polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters using fuzzy integrals and sparse representation is given, and target detection in complex scenes is completed with a clustering image segmentation algorithm based on fractal characteristics. For polarization image tracking, a fusion tracking algorithm combining mean-shift polarization image characteristics with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.
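The "multiple polarization parameters calculation" step has a standard core: recovering the linear Stokes parameters from intensity images taken behind a polarizer at 0°, 45°, 90° and 135°, then deriving the degree and angle of linear polarization. A minimal sketch follows; the scene Stokes values are hypothetical, and this is the textbook four-angle method rather than the paper's omnidirectional inversion model.

```python
import numpy as np

# Malus-law model: intensity behind a linear polarizer at angle a is
# I(a) = 0.5 * (S0 + S1*cos(2a) + S2*sin(2a)) for Stokes vector (S0,S1,S2).
S0, S1, S2 = 2.0, 0.6, 0.8          # hypothetical scene Stokes parameters

def measured(a):
    return 0.5 * (S0 + S1 * np.cos(2 * a) + S2 * np.sin(2 * a))

I0, I45, I90, I135 = (measured(np.deg2rad(a)) for a in (0, 45, 90, 135))

# Multiple polarization parameter calculation from the four measurements.
s0 = I0 + I90
s1 = I0 - I90
s2 = I45 - I135
dolp = np.sqrt(s1**2 + s2**2) / s0        # degree of linear polarization
aop = 0.5 * np.arctan2(s2, s1)            # angle of polarization
print(round(dolp, 3), round(np.rad2deg(aop), 1))   # prints: 0.5 26.6
```

Per-pixel images work the same way: the four scalars become four arrays and the arithmetic broadcasts elementwise.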
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing this redundancy through different sampling patterns. This work presents a compressive HS and MS image fusion approach which uses a high-dimensional joint sparse model, formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
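A minimal sketch of the compressive-fusion idea: two compressive samplers observe the same sparse signal, their measurement matrices are stacked into one joint model, and the signal is recovered by a sparse solver. Orthogonal matching pursuit below stands in for the paper's sparse optimization algorithms, and the sampler sizes and signal are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# A k-sparse "spectral" signal observed by two compressive samplers;
# stacking their measurement matrices gives the joint acquisition model.
n, k = 64, 3
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = np.array([1.5, -2.0, 1.0])
A_hs = rng.normal(size=(20, n)) / np.sqrt(20)   # hypothetical HS sampler
A_ms = rng.normal(size=(12, n)) / np.sqrt(12)   # hypothetical MS sampler
A = np.vstack([A_hs, A_ms])                     # joint model: 32 of 64
y = A @ x

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients on the chosen support.
support, resid = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(round(float(err), 6))
```

Here 32 joint measurements recover a 3-sparse length-64 signal essentially exactly, the same "fewer samples than pixels" effect the abstract reports at the datacube scale.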
NASA Astrophysics Data System (ADS)
Jiang, Guodong; Fan, Ming; Li, Lihua
2016-03-01
Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. Applying a computer-aided detection (CAD) system to assist a single radiologist is important for further improving mammographic sensitivity in breast cancer detection. In this study, the design and realization of a prototype remote diagnosis system for mammography based on a cloud platform were proposed. The system builds on medical image information construction, cloud infrastructure, and a human-machine diagnosis model. Specifically, a web platform for remote diagnosis was established with J2EE web technology, and the back end was realized through the open-source Hadoop framework. The storage system was built on the Hadoop distributed file system (HDFS), which enables users to easily develop and run applications on massive data and exploits the advantages of cloud computing: high efficiency, scalability, and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module implements algorithms that fuse machine and human intelligence: diagnoses from doctors' experience and from traditional CAD are combined using a man-machine intelligent fusion model based on Alpha-integration and a multi-agent algorithm. Finally, applications of this system at different levels of the platform are also discussed. This diagnosis system will be of great importance for balancing health resources, lowering medical expenses, and improving diagnostic accuracy in basic medical institutes.
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-09-30
In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose two sets of SPOT5-PAN images have been used, from which Change Detection Indices (CDIs) are calculated. To minimize radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one or another CDI for a change detection process is a subjective task, as each CDI is more or less sensitive to certain types of changes. This idea can be employed to create and improve a "change map" by exploiting each CDI's informational content. For this purpose, information metrics such as Shannon entropy and "specific information" have been used to weight the change and no-change categories contained in a given CDI, and are thus introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories have also been estimated. By contrast, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.
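The parametric Bayesian step can be sketched as a naive-Bayes combination of per-CDI Gaussian likelihoods: fit a pdf to the change and no-change categories under each index, then multiply likelihoods with the class priors. The synthetic CDIs and all distribution parameters below are illustrative assumptions, not the paper's SPOT5 data or its exact weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ground truth and two hypothetical change-detection indices
# (CDIs) over 1000 pixels; means and spreads are arbitrary assumptions.
change = np.r_[np.ones(200), np.zeros(800)].astype(bool)
cdi_a = np.where(change, 2.0, 0.0) + rng.normal(0, 0.8, 1000)
cdi_b = np.where(change, 1.5, 0.0) + rng.normal(0, 0.8, 1000)

def gauss_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Parametric step: fit a Gaussian pdf to each category under each CDI.
params = {}
for name, cdi in (("a", cdi_a), ("b", cdi_b)):
    for cls in (True, False):
        v = cdi[change == cls]
        params[name, cls] = (v.mean(), v.std())

# Bayesian fusion: multiply the per-CDI likelihoods with class priors
# (a naive-Bayes combination, assuming the CDIs are independent).
post_c = change.mean()
post_n = 1.0 - change.mean()
for name, cdi in (("a", cdi_a), ("b", cdi_b)):
    post_c = post_c * gauss_pdf(cdi, *params[name, True])
    post_n = post_n * gauss_pdf(cdi, *params[name, False])
fused_map = post_c > post_n
acc = float((fused_map == change).mean())
print(round(acc, 3))
```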
Situational awareness of a coordinated cyber attack
NASA Astrophysics Data System (ADS)
Sudit, Moises; Stotz, Adam; Holender, Michael
2005-03-01
As technology continues to advance, services and capabilities become computerized, and an ever-increasing amount of business is conducted electronically, the threat of cyber attacks is compounded by the complexity of such attacks and the criticality of the information which must be secured. A new age of virtual warfare has dawned, in which seconds can make the difference between the protection of vital information and/or services and a malicious attacker attaining their goal. In this paper we present a novel approach to the real-time detection of multistage coordinated cyber attacks and the promising initial testing results we have obtained. We introduce INFERD (INformation Fusion Engine for Real-time Decision-making), an adaptable information fusion engine which performs fusion at levels zero, one, and two to provide real-time situational assessment, and its application to the cyber domain in the ECCARS (Event Correlation for Cyber Attack Recognition System) system. The advantages of our approach are fourfold: (1) the complexity of the attacks which we consider, (2) the level of abstraction at which the analyst interacts with the attack scenarios, (3) the speed at which the information fusion is presented and performed, and (4) our disregard for ad-hoc rules or a priori parameters.
Framework Resources Multiply Computing Power
NASA Technical Reports Server (NTRS)
2010-01-01
As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.
Next-to-leading-order QCD corrections to Higgs boson production plus three jets in gluon fusion.
Cullen, G; van Deurzen, H; Greiner, N; Luisoni, G; Mastrolia, P; Mirabella, E; Ossola, G; Peraro, T; Tramontano, F
2013-09-27
We report on the calculation of the cross section for Higgs boson production in association with three jets via gluon fusion, at next-to-leading-order (NLO) accuracy in QCD, in the infinite top-mass approximation. After including the complete NLO QCD corrections, we observe a strong reduction in the scale dependence of the result, and an increased steepness in the transverse momentum distributions of both the Higgs boson and the leading jets. The results are obtained with the combined use of GOSAM, SHERPA, and the MADDIPOLE-MADEVENT framework.
Distributed Monte Carlo Information Fusion and Distributed Particle Filtering
2014-08-24
Manuel, Isaac L.; Bishop, Adrian N. (Australian National University)
…where v_it is Gaussian white noise with a random variance (Eq. 21). We initialised the filters with the state x_i0 = 0.1 for all i ∈ V.
INTRODUCTION: Status report on fusion research
NASA Astrophysics Data System (ADS)
Burkart, Werner
2005-10-01
A major milestone on the path to fusion energy was reached in June 2005 on the occasion of the signing of the joint declaration of all parties to the ITER negotiations, agreeing on future arrangements and on the construction site at Cadarache in France. The International Atomic Energy Agency has been promoting fusion activities since the late 1950s; it took over the auspices of the ITER Conceptual Design Activities in 1988, and of the ITER Engineering and Design Activities in 1992. The Agency continues its support to Member States through the organization of consultancies, workshops and technical meetings, the most prominent being the series of International Fusion Energy Conferences (formerly called the International Conference on Plasma Physics and Controlled Nuclear Fusion Research). The meetings serve as a platform for experts from all Member States to have open discussions on their latest accomplishments as well as on their problems and eventual solutions. The papers presented at the meetings and conferences are routinely published, many being sent to the journal Nuclear Fusion, co-published monthly by Institute of Physics Publishing, Bristol, UK. The journal is a world-renowned publication, and the International Fusion Research Council used it to publish Status Reports on Controlled Thermonuclear Fusion in 1978 and 1990. This present report marks the conclusion of the preparatory phases of ITER activities. It provides background information on the progress of fusion research within the last 15 years. The International Fusion Research Council (IFRC), which initiated the report, was fully aware of the complexity of including all scientific results in just one paper, and so decided to provide an overview and extensive references for the interested reader, who need not necessarily be a fusion specialist. Professor Predhiman K.
Kaw, Chairman, prepared the report on behalf of the IFRC, reflecting members' personal views on the latest achievements in fusion research, including magnetic and inertial confinement scenarios. The report describes fusion fundamentals and progress in fusion science and technology, with ITER as a possible partner in the realization of self-sustainable burning plasma. The importance of the socio-economic aspects of energy production using fusion power plants is also covered. Noting that applications of plasma science are of broad interest to the Member States, the report addresses the topic of plasma physics to assist in understanding the achievements of better coatings, cheaper light sources, improved heat-resistant materials and other high-technology materials. Nuclear fusion energy production is intrinsically safe, but for ITER the full range of hazards will need to be addressed, including minimising radiation exposure, to accomplish the goal of a sustainable and environmentally acceptable production of energy. We anticipate that the role of the Agency will in future evolve from supporting scientific projects and fostering information exchange to the preparation of safety principles and guidelines for the operation of burning fusion plasmas with a Q > 1. Technical progress in inertial and magnetic confinement, as well as in alternative concepts, will lead to a further increase in international cooperation. New means of communication will be needed, utilizing the best resources of modern information technology to advance interest in fusion. However, today the basis of scientific progress is still through journal publications and, with this in mind, we trust that this report will find an interested readership. We acknowledge with thanks the support of the members of the IFRC as an advisory body to the Agency. 
Seven chairmen have presided over the IFRC since its first meeting in 1971 in Madison, USA, ensuring that the IAEA fusion efforts were based on the best professional advice possible, and that information on fusion developments has been widely and expertly disseminated. We further acknowledge the efforts of the Chairman of the IFRC and of all authors and experts who contributed to this report on the present status of fusion research.
NASA Astrophysics Data System (ADS)
Mesbah, Mostefa; Balakrishnan, Malarvili; Colditz, Paul B.; Boashash, Boualem
2012-12-01
This article proposes a new method for newborn seizure detection that uses information extracted from both multi-channel electroencephalogram (EEG) and a single channel electrocardiogram (ECG). The aim of the study is to assess whether additional information extracted from ECG can improve the performance of seizure detectors based solely on EEG. Two different approaches were used to combine this extracted information. The first approach, known as feature fusion, involves combining features extracted from EEG and heart rate variability (HRV) into a single feature vector prior to feeding it to a classifier. The second approach, called classifier or decision fusion, is achieved by combining the independent decisions of the EEG and the HRV-based classifiers. Tested on recordings obtained from eight newborns with identified EEG seizures, the proposed neonatal seizure detection algorithms achieved 95.20% sensitivity and 88.60% specificity for the feature fusion case and 95.20% sensitivity and 94.30% specificity for the classifier fusion case. These results are considerably better than those involving classifiers using EEG only (80.90%, 86.50%) or HRV only (85.70%, 84.60%).
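The two fusion approaches the abstract contrasts can be sketched side by side. The nearest-centroid classifier, the AND decision rule, and the synthetic EEG/HRV features below are simplifying assumptions, not the authors' classifiers or data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic per-epoch features: 4 EEG features and 2 HRV features for
# non-seizure (0) and seizure (1) epochs; separations are assumptions.
n = 300
y = np.repeat([0, 1], n)
eeg = rng.normal(size=(2 * n, 4)) + y[:, None] * 1.2
hrv = rng.normal(size=(2 * n, 2)) + y[:, None] * 0.8

def nearest_mean(train_X, train_y, X):
    """Minimal stand-in classifier: assign the closer class centroid."""
    m0 = train_X[train_y == 0].mean(axis=0)
    m1 = train_X[train_y == 1].mean(axis=0)
    return (np.linalg.norm(X - m1, axis=1) <
            np.linalg.norm(X - m0, axis=1)).astype(int)

# Approach 1, feature fusion: concatenate EEG and HRV features into a
# single feature vector before classification.
both = np.hstack([eeg, hrv])
pred_feat = nearest_mean(both, y, both)

# Approach 2, classifier (decision) fusion: independent EEG-based and
# HRV-based classifiers whose decisions are combined, here by an AND rule.
pred_dec = nearest_mean(eeg, y, eeg) & nearest_mean(hrv, y, hrv)

acc_feat = float((pred_feat == y).mean())
acc_dec = float((pred_dec == y).mean())
print(round(acc_feat, 3), round(acc_dec, 3))
```

The AND rule trades sensitivity for specificity, which is one reason the two fusion strategies in the abstract achieve the same sensitivity but different specificities.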
Data Fusion for Enhanced Aircraft Engine Prognostics and Health Management
NASA Technical Reports Server (NTRS)
Volponi, Al
2005-01-01
Aircraft gas-turbine engine data is available from a variety of sources, including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data fusion is the integration of data or information from multiple sources for the achievement of improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/ information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This report describes a basic PHM data fusion architecture being developed in alignment with the NASA C-17 PHM Flight Test program. The challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. To address this challenge, NASA Glenn Research Center, NASA Dryden Flight Research Center, and Pratt & Whitney have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion, as it applies to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This report will provide a chronology and summary of the work accomplished under this research contract.
Image Fusion of CT and MR with Sparse Representation in NSST Domain
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques can integrate the information from different medical images to produce an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusion of CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR) based approach, and a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation. PMID:29250134
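The coefficient-domain merging rules can be illustrated in any multiresolution transform. Below, a one-level 2D Haar transform is a lightweight stand-in for the NSST, and plain averaging stands in for the SR/DGSR merge of the low-frequency components; the "CT" and "MR" images are synthetic toys.

```python
import numpy as np

rng = np.random.default_rng(6)

def haar2(x):
    """One-level 2D Haar transform: low band plus three detail bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 4.0, (a - b + c - d) / 4.0,
            (a + b - c - d) / 4.0, (a - b - c + d) / 4.0)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 for even-sized inputs."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# Toy "CT" (a bright block with sharp edges) and "MR" (soft variation).
ct = np.zeros((32, 32)); ct[8:24, 8:24] = 1.0
mr = rng.normal(0.5, 0.05, size=(32, 32))

ll1, *hi1 = haar2(ct)
ll2, *hi2 = haar2(mr)

# High-frequency bands: absolute-maximum rule (keep the stronger detail).
hi_f = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
        for h1, h2 in zip(hi1, hi2)]
# Low-frequency band: plain averaging here, standing in for the paper's
# sparse-representation (SR/DGSR) merge of the low-frequency components.
ll_f = (ll1 + ll2) / 2.0

fused = ihaar2(ll_f, *hi_f)
print(fused.shape)   # prints: (32, 32)
```

The absolute-maximum rule guarantees the fused detail bands are at least as strong, coefficient by coefficient, as either input's, which is why edges from both modalities survive in the fused image.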
Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.
Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua
2017-10-16
In order to meet higher accuracy and system reliability requirements, information fusion for multi-sensor systems is of increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationship between pieces of evidence may lead to unreasonable fusion results, and even to wrong decisions. This assumption severely limits the practical application and further development of D-S evidence theory. In this paper, an innovative evidence fusion model that deals with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the degree of dependence between different pieces of evidence. Then a total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of the evidence. Finally, the discounted evidence fusion model is presented. An example illustrates the use and effectiveness of the proposed method. PMID:29035341
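The proposed pipeline (measure dependence by rank correlation, discount the evidence accordingly, then combine by Dempster's rule) can be sketched on a two-element frame {A, B}. The mapping from correlation to discount coefficient below is a toy assumption, not the paper's formula, and the sensor readings and mass functions are hypothetical.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation coefficient (no ties assumed)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def discount(m, alpha):
    """Shafer discounting: scale masses by alpha, move the rest to Theta."""
    out = {k: alpha * v for k, v in m.items()}
    out["AB"] = out.get("AB", 0.0) + (1 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule on frame {A, B} with focal sets A, B, AB (=Theta)."""
    inter = {("A", "A"): "A", ("B", "B"): "B", ("AB", "AB"): "AB",
             ("A", "AB"): "A", ("AB", "A"): "A",
             ("B", "AB"): "B", ("AB", "B"): "B",
             ("A", "B"): None, ("B", "A"): None}
    comb, conflict = {}, 0.0
    for k1, v1 in m1.items():
        for k2, v2 in m2.items():
            tgt = inter[(k1, k2)]
            if tgt is None:
                conflict += v1 * v2          # empty intersection
            else:
                comb[tgt] = comb.get(tgt, 0.0) + v1 * v2
    return {k: v / (1 - conflict) for k, v in comb.items()}

# Two sensors report readings over time; their rank correlation estimates
# the dependence degree, which sets a (toy) discount coefficient.
s1 = np.array([1.0, 2.1, 2.9, 4.2, 5.0])
s2 = np.array([1.2, 2.0, 3.1, 4.0, 5.3])
rho = spearman(s1, s2)
alpha = 1.0 - 0.5 * max(rho, 0.0)   # hypothetical dependence-to-discount map

m1 = {"A": 0.7, "B": 0.2, "AB": 0.1}
m2 = {"A": 0.6, "B": 0.3, "AB": 0.1}
fused = dempster(discount(m1, alpha), discount(m2, alpha))
print(round(rho, 2), round(fused["A"], 3))   # prints: 1.0 0.504
```

Because the readings are fully rank-correlated, both bodies of evidence are heavily discounted before combination, so the fused belief in A stays well below what naive (independence-assuming) combination would give.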
Image Fusion of CT and MR with Sparse Representation in NSST Domain.
Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren
2017-01-01
Multimodal image fusion techniques integrate the information from different medical images into a single informative image better suited for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with different MR modalities is studied in this paper. First, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse-representation (SR)-based approach; a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of both subjective quality and objective evaluation. PMID:29250134
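The high-frequency merging step is simple enough to sketch directly. The toy example below (plain Python lists standing in for NSST subband arrays; the NSST and sparse-representation stages are not shown) applies the absolute-maximum rule element-wise, keeping whichever coefficient has the larger magnitude at each position:

```python
def fuse_abs_max(c1, c2):
    """Absolute-maximum rule: at each position keep the coefficient
    with the larger magnitude (the stronger detail response)."""
    return [[x if abs(x) >= abs(y) else y for x, y in zip(r1, r2)]
            for r1, r2 in zip(c1, c2)]

# Toy high-frequency subbands from two source images.
ct = [[ 0.9, -0.1], [ 0.0,  0.4]]
mr = [[-0.2,  0.8], [-0.5,  0.1]]
fused = fuse_abs_max(ct, mr)   # [[0.9, 0.8], [-0.5, 0.4]]
```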
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm greatly reduces the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
Trust metrics in information fusion
NASA Astrophysics Data System (ADS)
Blasch, Erik
2014-05-01
Trust is an important concept for machine intelligence and is not consistent across many applications. In this paper, we seek to understand trust from a variety of factors: humans, sensors, communications, intelligence processing algorithms and human-machine displays of information. In modeling the various aspects of trust, we provide an example from machine intelligence that supports the various attributes of measuring trust such as sensor accuracy, communication timeliness, machine processing confidence, and display throughput to convey the various attributes that support user acceptance of machine intelligence results. The example used is fusing video and text whereby an analyst needs trust information in the identified imagery track. We use the proportional conflict redistribution rule as an information fusion technique that handles conflicting data from trusted and mistrusted sources. The discussion of the many forms of trust explored in the paper seeks to provide a systems-level design perspective for information fusion trust quantification.
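The proportional conflict redistribution family mentioned above can be sketched for two sources. The snippet below is a minimal PCR5-style implementation on toy mass functions, not necessarily the exact variant used in the paper: the conjunctive consensus is kept, and each partial conflict is redistributed back to the two focal elements that caused it, proportionally to their masses.

```python
from itertools import product

def pcr5_combine(m1, m2):
    """PCR5 (two sources): keep the conjunctive consensus; for each
    conflicting pair (A, B) with A & B empty, redistribute the partial
    conflict m1(A)*m2(B) to A and B in proportion to m1(A) and m2(B)."""
    out = {}
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            out[inter] = out.get(inter, 0.0) + v1 * v2
        else:
            out[A] = out.get(A, 0.0) + v1 * v1 * v2 / (v1 + v2)
            out[B] = out.get(B, 0.0) + v2 * v2 * v1 / (v1 + v2)
    return out

# A trusted and a mistrusted source disagreeing over frame {a, b}.
a, b = frozenset("a"), frozenset("b")
m1 = {a: 0.6, b: 0.4}
m2 = {a: 0.1, b: 0.9}
fused = pcr5_combine(m1, m2)
```

Unlike Dempster's rule, no mass is discarded and renormalized away; the total mass stays at 1 by construction, which is why PCR-style rules behave better under high conflict.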
Development of an Information Fusion System for Engine Diagnostics and Health Management
NASA Technical Reports Server (NTRS)
Volponi, Allan J.; Brotherton, Tom; Luppold, Robert; Simon, Donald L.
2004-01-01
Aircraft gas-turbine engine data are available from a variety of sources including on-board sensor measurements, maintenance histories, and component models. An ultimate goal of Propulsion Health Management (PHM) is to maximize the amount of meaningful information that can be extracted from disparate data sources to obtain comprehensive diagnostic and prognostic knowledge regarding the health of the engine. Data Fusion is the integration of data or information from multiple sources, to achieve improved accuracy and more specific inferences than can be obtained from the use of a single sensor alone. The basic tenet underlying the data/information fusion concept is to leverage all available information to enhance diagnostic visibility, increase diagnostic reliability and reduce the number of diagnostic false alarms. This paper describes a basic PHM Data Fusion architecture being developed in alignment with the NASA C17 Propulsion Health Management (PHM) Flight Test program. The challenge of how to maximize the meaningful information extracted from disparate data sources to obtain enhanced diagnostic and prognostic information regarding the health and condition of the engine is the primary goal of this endeavor. To address this challenge, NASA Glenn Research Center (GRC), NASA Dryden Flight Research Center (DFRC) and Pratt & Whitney (P&W) have formed a team with several small innovative technology companies to plan and conduct a research project in the area of data fusion as applied to PHM. Methodologies being developed and evaluated have been drawn from a wide range of areas including artificial intelligence, pattern recognition, statistical estimation, and fuzzy logic. This paper will provide a broad overview of this work, discuss some of the methodologies employed and give some illustrative examples.
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
Low-level-light (LLL) images contain rich information on environmental details but are easily affected by the weather: in smoke, rain, cloud, or fog, much target information is lost. Infrared images, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, their contrast and resolution are poor, their ability to capture target details is very limited, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor while retaining the advantages of both. We first present the hardware design of the fusion circuit. Then, by computing recognition probabilities for a target (one person) and the background (trees), we find that the tree-detection probability of the LLL image is higher than that of the infrared image, while the person-detection probability of the infrared image is markedly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees exceeds that of either single detector. Image fusion therefore significantly increases recognition probability and improves detection efficiency.
Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu
2015-02-01
The aim was to develop a more precise and accurate method of locating acupoints and to identify a procedure for verifying whether an acupoint has been correctly located. For facial acupoints, we collected acupoint locations from different acupuncture experts and derived the most precise and accurate location values using a consistency information fusion algorithm, through a virtual simulation of a facial orientation coordinate system. Because each expert's original data contain inconsistencies, each expert's systematic error enters the general weight calculation. We therefore first corrected each expert's own systematic location error, obtaining a rational quantification of the consistency support degree for each expert's acupoint locations and yielding pointwise variable-precision fusion results, so that every expert's fusion error is raised to pointwise variable precision. This makes more effective use of the measured characteristics of the different experts' acupoint locations and improves both the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency-matrix pointwise fusion method to the experts' acupoint location values, each expert's location information can be calculated and the most precise and accurate values of each expert's acupoint location obtained.
Cognitive Load Measurement in a Virtual Reality-based Driving System for Autism Intervention.
Zhang, Lian; Wade, Joshua; Bian, Dayi; Fan, Jing; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2017-01-01
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system was introduced to teach driving skills to adolescents with ASD. This driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving such that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study and the data collected were used for systematic feature extraction and classification of cognitive loads based on five well-known machine learning methods. Subsequently, three information fusion schemes (feature-level fusion, decision-level fusion, and hybrid-level fusion) were explored. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization.
Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo
2018-06-15
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) that integrates a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by volume the way a stereo camera is constrained by its baseline, and it overcomes the limited depth range associated with SLAM for RGB-D cameras. We first establish the analytical feasibility of estimating the absolute scale by fusing 1D distance information with image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of monocular SLAM using the laser distance information, which is independent of the drift error. Finally, the approach is verified on both indoor and outdoor scenes using the Technical University of Munich RGB-D dataset and self-collected data, and we compare its scale estimation and drift correction against SLAM with a monocular camera and with an RGB-D camera.
Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying
2016-12-20
The arrival of the precision medicine plan brings new opportunities and challenges for the precise diagnosis and treatment of patients with malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their clinical application to the diagnosis and treatment of malignant tumors. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice reports, reviews, and other relevant literature published in English were reviewed; papers focusing on these topics were selected and duplicates excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility, and we believe that multimodality imaging fusion technology will find increasingly wide application in clinical practice.
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion is particularly useful when fusing imagery captured at multiple levels of focus: different focus levels create different visual qualities in different regions of the imagery, so fusing them can give analysts much more visual information. Multi-focus image fusion benefits users through automation, which requires evaluating the fused images to determine whether the focused regions of each source image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed for such comparisons. However, accurately assessing visual quality at scale is hard, and it requires validating these metrics for different types of applications. To this end, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used for multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with image quality and peak signal-to-noise ratio (PSNR).
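Two of the reference measures named here, PSNR and mutual information, are easy to state concretely. A minimal pure-Python sketch on toy grayscale grids follows (a real evaluation would of course run on full-size images):

```python
import math
from collections import Counter

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size grayscale images."""
    n = sum(len(row) for row in ref)
    mse = sum((a - b) ** 2 for ra, rb in zip(ref, img)
              for a, b in zip(ra, rb)) / n
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def mutual_information(x, y):
    """Mutual information (bits) between the pixel values of two images,
    estimated from their joint histogram."""
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

img  = [[0, 0, 255], [255, 0, 255]]
same = [[0, 0, 255], [255, 0, 255]]
```

For an image compared with itself, PSNR is infinite and the mutual information equals the image's own entropy (1 bit here, since the pixels split evenly between two values).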
NASA Astrophysics Data System (ADS)
Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza
2012-06-01
It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step toward successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods; their theoretical basis and characteristics are explained and their performance analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods, and their performance is tested on decisions generated from synthetic data and from experimental data. Also presented and used in this paper is a modeling methodology, the cloud model, for generating synthetic decisions. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide a fair performance comparison of the selected decision-fusion algorithms.
For verification purposes, implementation results of the developed fusion algorithms on structural health monitoring data collected from experimental tests are reported in this paper.
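Of the candidate methods named above, the Bayesian one admits the shortest sketch. Assuming conditionally independent decision-makers and uniform class priors (an illustrative simplification, not the paper's cloud-model setup), per-class posteriors fuse by normalized product:

```python
def bayes_decision_fusion(posteriors, prior=None):
    """Naive-Bayes decision fusion: with conditionally independent
    decision-makers, fuse per-class posteriors by normalized product."""
    classes = list(posteriors[0].keys())
    fused = {c: (prior or {}).get(c, 1.0) for c in classes}
    for p in posteriors:
        for c in classes:
            fused[c] *= p[c]
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}

# Three decision-makers assessing {healthy, damaged}; two lean "damaged".
votes = [{"healthy": 0.4, "damaged": 0.6},
         {"healthy": 0.3, "damaged": 0.7},
         {"healthy": 0.7, "damaged": 0.3}]
fused = bayes_decision_fusion(votes)
```

Note how the product rule lets two moderately confident "damaged" votes outweigh one confident "healthy" vote, which a simple majority vote would also do but without a calibrated confidence on the fused decision.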
NASA Astrophysics Data System (ADS)
Wen, Xueda; Matsuura, Shunji; Ryu, Shinsei
2016-06-01
We develop an approach based on edge theories to calculate the entanglement entropy and related quantities in (2+1)-dimensional topologically ordered phases. Our approach is complementary to, e.g., the existing methods using replica trick and Witten's method of surgery, and applies to a generic spatial manifold of genus g , which can be bipartitioned in an arbitrary way. The effects of fusion and braiding of Wilson lines can be also straightforwardly studied within our framework. By considering a generic superposition of states with different Wilson line configurations, through an interference effect, we can detect, by the entanglement entropy, the topological data of Chern-Simons theories, e.g., the R symbols, monodromy, and topological spins of quasiparticles. Furthermore, by using our method, we calculate other entanglement/correlation measures such as the mutual information and the entanglement negativity. In particular, it is found that the entanglement negativity of two adjacent noncontractible regions on a torus provides a simple way to distinguish Abelian and non-Abelian topological orders.
Forecasting Chronic Diseases Using Data Fusion.
Acar, Evrim; Gürdeniz, Gözde; Savorani, Francesco; Hansen, Louise; Olsen, Anja; Tjønneland, Anne; Dragsted, Lars Ove; Bro, Rasmus
2017-07-07
Data fusion, that is, extracting information through the fusion of complementary data sets, is a topic of great interest in metabolomics because analytical platforms such as liquid chromatography-mass spectrometry (LC-MS) and nuclear magnetic resonance (NMR) spectroscopy commonly used for chemical profiling of biofluids provide complementary information. In this study, with a goal of forecasting acute coronary syndrome (ACS), breast cancer, and colon cancer, we jointly analyzed LC-MS, NMR measurements of plasma samples, and the metadata corresponding to the lifestyle of participants. We used supervised data fusion based on multiple kernel learning and exploited the linearity of the models to identify significant metabolites/features for the separation of healthy referents and the cases developing a disease. We demonstrated that (i) fusing LC-MS, NMR, and metadata provided better separation of ACS cases and referents compared with individual data sets, (ii) NMR data performed the best in terms of forecasting breast cancer, while fusion degraded the performance, and (iii) neither the individual data sets nor their fusion performed well for colon cancer. Furthermore, we showed the strengths and limitations of the fusion models by discussing their performance in terms of capturing known biomarkers for smoking and coffee. While fusion may improve performance in terms of separating certain conditions by jointly analyzing metabolomics and metadata sets, it is not necessarily always the best approach as in the case of breast cancer.
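The multiple-kernel fusion used in the study rests on the fact that a convex combination of Gram matrices is itself a valid kernel. A toy sketch follows (2x2 matrices standing in for per-platform kernels; the learning of the weights and the SVM training, which the actual MKL method performs, are omitted):

```python
def combine_kernels(kernels, weights):
    """Multiple-kernel fusion: a convex combination of per-source Gram
    matrices (e.g. one from LC-MS, one from NMR, one from metadata)
    is itself positive semidefinite and usable by any kernel classifier."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

# Toy 2x2 Gram matrices for two samples under two analytical platforms.
K_lcms = [[1.0, 0.2], [0.2, 1.0]]
K_nmr  = [[1.0, 0.8], [0.8, 1.0]]
K = combine_kernels([K_lcms, K_nmr], [0.5, 0.5])
```

The linearity that the authors exploit for feature identification is visible here: each entry of the fused kernel is a weighted sum of the per-platform entries, so each platform's contribution can be read off from its weight.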
Calhoun, Vince D; Sui, Jing
2016-01-01
It is becoming increasingly clear that combining multi-modal brain imaging data is able to provide more information for individual subjects by exploiting the rich multimodal information that exists. However, the number of studies that do true multimodal fusion (i.e. capitalizing on joint information among modalities) is still remarkably small given the known benefits. In part, this is because multi-modal studies require broader expertise in collecting, analyzing, and interpreting the results than do unimodal studies. In this paper, we start by introducing the basic reasons why multimodal data fusion is important and what it can do, and importantly how it can help us avoid wrong conclusions and help compensate for imperfect brain imaging studies. We also discuss the challenges that need to be confronted for such approaches to be more widely applied by the community. We then provide a review of the diverse studies that have used multimodal data fusion (primarily focused on psychosis) as well as provide an introduction to some of the existing analytic approaches. Finally, we discuss some up-and-coming approaches to multi-modal fusion including deep learning and multimodal classification which show considerable promise. Our conclusion is that multimodal data fusion is rapidly growing, but it is still underutilized. The complexity of the human brain coupled with the incomplete measurement provided by existing imaging technology makes multimodal fusion essential in order to mitigate against misdirection and hopefully provide a key to finding the missing link(s) in complex mental illness. PMID:27347565
Energy-resolved neutron imaging for inertial confinement fusion
NASA Astrophysics Data System (ADS)
Moran, M. J.; Haan, S. W.; Hatchett, S. P.; Izumi, N.; Koch, J. A.; Lerche, R. A.; Phillips, T. W.
2003-03-01
The success of the National Ignition Facility program will depend on diagnostic measurements which study the performance of inertial confinement fusion (ICF) experiments. Neutron yield, fusion-burn time history, and images are examples of important diagnostics. Neutron and x-ray images will record the geometries of compressed targets during the fusion-burn process. Such images provide a critical test of the accuracy of numerical modeling of ICF experiments. They also can provide valuable information in cases where experiments produce unexpected results. Although x-ray and neutron images provide similar data, they do have significant differences. X-ray images represent the distribution of high-temperature regions where fusion occurs, while neutron images directly reveal the spatial distribution of fusion-neutron emission. X-ray imaging has the advantage of a relatively straightforward path to the imaging system design. Neutron imaging, by using energy-resolved detection, offers the intriguing advantage of being able to provide independent images of burning and nonburning regions of the nuclear fuel. The usefulness of energy-resolved neutron imaging depends on both the information content of the data and on the quality of the data that can be recorded. The information content will relate to the characteristic neutron spectra that are associated with emission from different regions of the source. Numerical modeling of ICF fusion burn will be required to interpret the corresponding energy-dependent images. The exercise will be useful only if the images can be recorded with sufficient definition to reveal the spatial and energy-dependent features of interest. Several options are being evaluated with respect to the feasibility of providing the desired simultaneous spatial and energy resolution.
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors, yet there are many scenarios in which no single sensor gives the complete picture. Image fusion is an important approach to this problem: it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning: they reveal the relatively greater importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments undertaken on multisource images demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the relevant quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)) and mutual information.
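The zero-filling construction of the multiscale filters is the familiar "a trous" (undecimated) scheme and can be sketched directly. In the snippet below, an arbitrary toy filter stands in for the basic support value filter deduced from the mapped LS-SVM:

```python
def upsample_filter(h, level):
    """Build the level-k filter by inserting 2**k - 1 zeros between the
    taps of the basic filter (the 'a trous' construction), so each level
    analyzes a coarser scale without decimating the image."""
    if level == 0:
        return list(h)
    gap = 2 ** level - 1
    out = []
    for i, tap in enumerate(h):
        out.append(tap)
        if i < len(h) - 1:
            out.extend([0.0] * gap)
    return out

h0 = [1.0, 2.0, 1.0]        # toy stand-in for the basic support value filter
h1 = upsample_filter(h0, 1)  # [1.0, 0.0, 2.0, 0.0, 1.0]
```

Because no subsampling occurs between levels, every scale's output stays at full image resolution, which is what makes the transform shift-invariant compared with a decimated wavelet pyramid.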
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify, and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities; multiple modalities improve tracking by reducing the uncertainty in the track estimates and by resolving track-to-sensor data association problems. Our approach to the fusion problem with a large number of multimodal sensors is the construction of likelihood maps, which summarize the data needed to solve the detection, tracking, and classification problems. The likelihood map presents the sensory information in a format that is easy for decision-makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: the likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
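The likelihood-map combination can be sketched for the conditionally independent case: per-cell likelihoods from different modalities multiply, so log-likelihood maps simply add (the gossip-based distributed computation is not shown; the sensor names and values are illustrative only).

```python
import math

def fuse_likelihood_maps(maps):
    """Fuse per-sensor likelihood maps over a common spatial grid.
    Assuming conditionally independent sensors, per-cell likelihoods
    multiply, i.e. their logs add."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(math.log(m[i][j]) for m in maps)
             for j in range(cols)] for i in range(rows)]

# Two hypothetical modalities over a 1x3 strip; both favor the middle cell.
acoustic = [[0.1, 0.8, 0.1]]
seismic  = [[0.2, 0.6, 0.2]]
logmap = fuse_likelihood_maps([acoustic, seismic])
best = max(range(3), key=lambda j: logmap[0][j])  # index of the fused peak
```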
Research on the use of data fusion technology to evaluate the state of electromechanical equipment
NASA Astrophysics Data System (ADS)
Lin, Lin
2018-04-01
To address the problems of heterogeneous test information and the coexistence of quantitative and qualitative information in the state evaluation of electromechanical equipment, this paper proposes the use of data fusion technology to evaluate the state of electromechanical equipment. The paper introduces the state evaluation process for electromechanical equipment in detail, uses D-S evidence theory to fuse decision-level information from the state evaluation, and carries out simulation tests. The simulation results show that applying data fusion technology to the state evaluation of electromechanical equipment is feasible and effective. After the multiple pieces of decision-making information provided by different evaluation methods are fused repeatedly and the useful information is extracted, the fuzziness of the judgment can be reduced and the credibility of the state evaluation improved.
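The decision-level fusion step the abstract refers to rests on Dempster's rule of combination from D-S evidence theory. A minimal sketch follows; the equipment states and the mass values assigned by the two evaluation methods are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)
    with Dempster's rule, renormalizing away the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # incompatible hypotheses
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Hypothetical equipment states: N = normal, F = faulty; 'NF' is ignorance.
N, F, NF = frozenset('N'), frozenset('F'), frozenset('NF')
vibration = {N: 0.6, F: 0.3, NF: 0.1}   # evidence from vibration analysis
thermal   = {N: 0.7, F: 0.2, NF: 0.1}   # evidence from thermal imaging
fused = dempster_combine(vibration, thermal)
```

Because both sources lean toward the normal state, the fused mass on N exceeds either input mass, illustrating how repeated fusion sharpens the judgment.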
A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
In this paper, we address the problem of technical analysis information fusion for improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization, where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks shows that the presented ensemble-based technical indicator fusion system significantly improves forecasting accuracy in comparison with a single NN. It also outperforms the classical neural network trained with index-level lagged values and an NN trained with stationary wavelet transform detail and approximation coefficients. As a result, technical information fusion in an NN ensemble architecture helps improve prediction accuracy.
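The ensemble structure, one predictor per category of technical indicators with the outputs combined, can be sketched as follows. As a deliberate simplification, closed-form linear regressors stand in for the paper's neural networks, particle swarm optimization is omitted, and the price series and indicator categories (momentum, moving-average trend) are hypothetical.

```python
def sma(prices, n):
    """Simple moving average, one common technical-analysis indicator."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def fit_line(x, y):
    """Closed-form least squares y ~ a*x + b for a single feature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

prices = [10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17, 18]
target = prices[3:]                                     # next-step price
momentum = [p1 - p0 for p0, p1 in zip(prices, prices[1:])][1:len(target) + 1]
trend = sma(prices, 3)[:len(target)]

# One predictor per indicator category, trained independently.
ensemble = []
for feature in (momentum, trend):
    a, b = fit_line(feature, target)
    ensemble.append((a, b, feature))

# Ensemble output: average of the per-category predictions.
pred = sum(a * f[-1] + b for a, b, f in ensemble) / len(ensemble)
```

The design point carried over from the abstract is that each model sees only its own indicator category, so fusion happens at the prediction level rather than by concatenating all features into one input.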
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best state-of-the-art segmentation methods recently proposed in the literature.
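The pairwise binary constraints behind the Rand-measure energy can be sketched directly: for every pixel pair, each input segmentation votes on whether the two pixels share a label, and a fused labeling is penalized for each disagreement. This toy version evaluates a few hand-picked candidate labelings by brute force rather than performing the paper's MRF optimization; the 6-pixel segmentations are hypothetical.

```python
from itertools import combinations

def rand_energy(labeling, segmentations):
    """Gibbs-style energy: for every pixel pair, count how many input
    segmentations disagree with the fused labeling on whether the two
    pixels share a label (the binary constraints of the Rand measure)."""
    energy = 0
    for i, j in combinations(range(len(labeling)), 2):
        same = labeling[i] == labeling[j]
        for seg in segmentations:
            if (seg[i] == seg[j]) != same:
                energy += 1
    return energy

# Three quick segmentations of a hypothetical 6-pixel row (two regions).
segs = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [0, 0, 0, 1, 1, 1]]
candidates = [[0, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 0, 1, 1]]
best = min(candidates, key=lambda c: rand_energy(c, segs))
```

The minimum-energy labeling is the one that agrees with the majority of the input segmentations on the placement of the region boundary, which is the consensus behavior the fusion model is designed to produce.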
Pires, Ivan Miguel; Garcia, Nuno M; Pombo, Nuno; Flórez-Revuelta, Francisco; Spinsante, Susanna
2018-02-21
Sensors available on mobile devices allow the automatic identification of Activities of Daily Living (ADL). This paper describes an approach to the creation of a framework for the identification of ADL, taking into account several concepts, including data acquisition, data processing, data fusion, and pattern recognition. These concepts can be mapped onto different modules of the framework. The proposed framework should perform the identification of ADL without an Internet connection, performing these tasks locally on the mobile device and taking into account the hardware and software limitations of such devices. The main purpose of this paper is to present a new approach to the creation of a framework for the recognition of ADL, analyzing the sensors available in mobile devices and the existing methods available in the literature.
Pombo, Nuno
2018-01-01
Sensors available on mobile devices allow the automatic identification of Activities of Daily Living (ADL). This paper describes an approach to the creation of a framework for the identification of ADL, taking into account several concepts, including data acquisition, data processing, data fusion, and pattern recognition. These concepts can be mapped onto different modules of the framework. The proposed framework should perform the identification of ADL without an Internet connection, performing these tasks locally on the mobile device and taking into account the hardware and software limitations of such devices. The main purpose of this paper is to present a new approach to the creation of a framework for the recognition of ADL, analyzing the sensors available in mobile devices and the existing methods available in the literature. PMID:29466316
Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal
2016-06-01
Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur, and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large real-world face video database that we have collected, labelled, and made publicly available, McGillFaces [1], comprising 18,000 video frames, reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a quality assessment index for remote sensing image fusion, called the feature-based fourth-order correlation coefficient (FFOCC), is proposed. FFOCC builds on the feature-based correlation coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the Quality with No Reference (QNR) index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. It significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; White, R. B.
2017-09-01
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) expected unstable AE spectrum and (ii) resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Additional information from the actual experiment enables further tuning of the model’s parameters to achieve a close match with measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.
Chacón, Juliana; Sousa, Aretuza; Baeza, Carlos M; Renner, Susanne S
2012-09-01
Understanding the flexibility of monocot genomes requires a phylogenetic framework, which so far is available for few of the ca. 2800 genera. Here we use a molecular tree for the South American genus Alstroemeria to place karyological information, including fluorescent in situ hybridization (FISH) signals, in an explicit evolutionary context. From a phylogeny based on plastid, nuclear, and mitochondrial sequences for most species of Alstroemeria, we selected early-branching (Chilean) and derived (Brazilian) species for which we obtained 18S-25S and 5S rDNA FISH signals; we also analyzed chromosome numbers, 1C-values, and telomere FISH signals (in two species). Chromosome counts for Alstroemeria cf. rupestris and A. pulchella confirm 2n = 16 as typical of the genus, which now has chromosomes counted for 29 of its 78 species. The rDNA sites are polymorphic both among and within species, and interstitial telomeric sites in Alstroemeria cf. rupestris suggest chromosome fusion. In spite of a constant chromosome number, closely related species of Alstroemeria differ drastically in their rDNA, indicating rapid increase, decrease, or translocations of these genes. Previously proposed Brazilian and Chilean karyotype groups are not natural, and the n = 8 chromosomes in Alstroemeria compared to n = 9 in its sister genus Bomarea may result from a Robertsonian fusion.
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; ...
2017-07-20
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.
Spatial resolution enhancement of satellite image data using fusion approach
NASA Astrophysics Data System (ADS)
Lestiana, H.; Sukristiyanti
2018-02-01
Object identification using remote sensing data is problematic when the spatial resolution is not in accordance with the size of the object. The fusion approach is one method of solving this problem, improving object recognition and increasing object information by combining data from multiple sensors. Image fusion can be used to estimate environmental components that need to be monitored from multiple views, such as evapotranspiration estimation, 3D ground-based characterisation, smart city applications, urban environments, terrestrial mapping, and water vegetation. With fusion methods, visible objects on land have been easily recognized, and the variety of object information on land has increased the range of environmental components that can be estimated. The difficulty of recognizing invisible objects such as Submarine Groundwater Discharge (SGD), especially in tropical areas, might be reduced by fusion methods; the small variation of the sea surface temperature remains a challenge to be solved.
Direct observation of intermediate states in model membrane fusion
Keidel, Andrea; Bartsch, Tobias F.; Florin, Ernst-Ludwig
2016-01-01
We introduce a novel assay for membrane fusion of solid supported membranes on silica beads and on coverslips. Fusion of the lipid bilayers is induced by bringing an optically trapped bead in contact with the coverslip surface while observing the bead’s thermal motion with microsecond temporal and nanometer spatial resolution using a three-dimensional position detector. The probability of fusion is controlled by the membrane tension on the particle. We show that the progression of fusion can be monitored by changes in the three-dimensional position histograms of the bead and in its rate of diffusion. We were able to observe all fusion intermediates including transient fusion, formation of a stalk, hemifusion and the completion of a fusion pore. Fusion intermediates are characterized by axial but not lateral confinement of the motion of the bead and independently by the change of its rate of diffusion due to the additional drag from the stalk-like connection between the two membranes. The detailed information provided by this assay makes it ideally suited for studies of early events in pure lipid bilayer fusion or fusion assisted by fusogenic molecules. PMID:27029285
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using the maximum selection rule, which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, which include several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness of the proposed approach. PMID:24453868
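The maximum selection rule at the heart of this kind of wavelet-domain fusion can be sketched in one dimension with a single-level Haar transform: detail coefficients are taken from whichever modality has the larger magnitude, so the sharper structure survives. The paper's method operates on 2-D images across multiple scales; the two signals below are hypothetical stand-ins for a blurred and a sharp modality.

```python
def haar_1d(signal):
    """One-level Haar decomposition: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of the one-level Haar decomposition."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(sig1, sig2):
    """Average the approximations; pick the detail coefficient with the larger
    magnitude (the maximum selection rule) so the sharper edge survives."""
    a1, d1 = haar_1d(sig1)
    a2, d2 = haar_1d(sig2)
    fused_a = [(x + y) / 2 for x, y in zip(a1, a2)]
    fused_d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return ihaar_1d(fused_a, fused_d)

blurred = [4, 4, 4, 4, 4, 4, 4, 4]   # modality with weak detail
sharp   = [4, 4, 8, 0, 4, 4, 4, 4]   # modality with a strong edge
fused = fuse(blurred, sharp)
```

Here the fused signal keeps the strong edge from the sharp modality while the smooth background (the approximation band) is averaged across both inputs.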
Direct observation of intermediate states in model membrane fusion.
Keidel, Andrea; Bartsch, Tobias F; Florin, Ernst-Ludwig
2016-03-31
We introduce a novel assay for membrane fusion of solid supported membranes on silica beads and on coverslips. Fusion of the lipid bilayers is induced by bringing an optically trapped bead in contact with the coverslip surface while observing the bead's thermal motion with microsecond temporal and nanometer spatial resolution using a three-dimensional position detector. The probability of fusion is controlled by the membrane tension on the particle. We show that the progression of fusion can be monitored by changes in the three-dimensional position histograms of the bead and in its rate of diffusion. We were able to observe all fusion intermediates including transient fusion, formation of a stalk, hemifusion and the completion of a fusion pore. Fusion intermediates are characterized by axial but not lateral confinement of the motion of the bead and independently by the change of its rate of diffusion due to the additional drag from the stalk-like connection between the two membranes. The detailed information provided by this assay makes it ideally suited for studies of early events in pure lipid bilayer fusion or fusion assisted by fusogenic molecules.
Culturally Proficient Coaching: Supporting Educators to Create Equitable Schools
ERIC Educational Resources Information Center
Lindsey, Delores B.; Martinez, Richard S.; Lindsey, Randall B.
2006-01-01
Multicultural classrooms require a multifaceted approach to creating inclusive, learning-rich environments that empower all students. To meet this growing need, "Culturally Proficient Coaching" provides educators with a simple, yet comprehensive, new framework: a powerful fusion of the field-tested and respected Cognitive Coaching and Cultural…
Dynamics simulations for engineering macromolecular interactions
Robinson-Mosher, Avi; Shinar, Tamar; Silver, Pamela A.; Way, Jeffrey
2013-01-01
The predictable engineering of well-behaved transcriptional circuits is a central goal of synthetic biology. The artificial attachment of promoters to transcription factor genes usually results in noisy or chaotic behaviors, and such systems are unlikely to be useful in practical applications. Natural transcriptional regulation relies extensively on protein-protein interactions to ensure tightly controlled behavior, but such tight control has been elusive in engineered systems. To help engineer protein-protein interactions, we have developed a molecular dynamics simulation framework that simplifies features of proteins moving by constrained Brownian motion, with the goal of performing long simulations. The behavior of a simulated protein system is determined by summation of forces that include a Brownian force, a drag force, excluded volume constraints, relative position constraints, and binding constraints that relate to experimentally determined on-rates and off-rates for chosen protein elements in a system. Proteins are abstracted as spheres. Binding surfaces are defined radially within a protein. Peptide linkers are abstracted as small protein-like spheres with rigid connections. To address whether our framework could generate useful predictions, we simulated the behavior of an engineered fusion protein consisting of two 20 000 Da proteins attached by flexible glycine/serine-type linkers. The two protein elements remained closely associated, as if constrained by a random walk in three dimensions of the peptide linker, as opposed to showing a distribution of distances expected if movement were dominated by Brownian motion of the protein domains only. We also simulated the behavior of fluorescent proteins tethered by a linker of varying length, compared the predicted Förster resonance energy transfer with previous experimental observations, and obtained a good correspondence. 
Finally, we simulated the binding behavior of a fusion of two ligands that could simultaneously bind to distinct cell-surface receptors, and explored the landscape of linker lengths and stiffnesses that could enhance receptor binding of one ligand when the other ligand has already bound to its receptor, thus, addressing potential mechanisms for improving targeted signal transduction proteins. These specific results have implications for the design of targeted fusion proteins and artificial transcription factors involving fusion of natural domains. More broadly, the simulation framework described here could be extended to include more detailed system features such as non-spherical protein shapes and electrostatics, without requiring detailed, computationally expensive specifications. This framework should be useful in predicting behavior of engineered protein systems including binding and dissociation reactions. PMID:23822508
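The force summation described above (Brownian force plus drag plus constraint forces) can be sketched as a 1-D Euler-Maruyama step of overdamped Langevin dynamics for two spheres joined by a harmonic linker. This is a minimal sketch only: the paper's framework is 3-D and includes excluded volume and binding constraints, and all parameter values here are hypothetical.

```python
import math, random

def brownian_step(x1, x2, dt, gamma, kT, k_link, rest):
    """One Euler-Maruyama step of overdamped Langevin dynamics for two
    tethered spheres in 1-D: linker spring force plus Gaussian thermal kicks,
    with drag entering through the mobility 1/gamma."""
    f = k_link * ((x2 - x1) - rest)        # linker spring force on sphere 1
    noise = math.sqrt(2 * kT * dt / gamma) # fluctuation-dissipation scaling
    x1 += f / gamma * dt + noise * random.gauss(0, 1)
    x2 += -f / gamma * dt + noise * random.gauss(0, 1)
    return x1, x2

random.seed(0)
x1, x2 = 0.0, 5.0                          # start stretched past rest length
for _ in range(20000):
    x1, x2 = brownian_step(x1, x2, dt=1e-4, gamma=1.0,
                           kT=0.1, k_link=10.0, rest=1.0)
separation = x2 - x1
```

After many steps the separation fluctuates around the linker rest length, the "closely associated" behavior the abstract reports for the tethered domains.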
A Fusion Architecture for Tracking a Group of People Using a Distributed Sensor Network
2013-07-01
Determining the composition of the group is done using several classifiers, drawing on ultrasonic and other sensor modalities. The fusion is done at the UGS level, combining information from all the modalities prior to classification and counting of the targets. Section III also presents the algorithms for fusion of distributed sensor data at the UGS level.
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2, and Landsat-8 images, and the results show the excellent performance of the proposed method.
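The guided image filter underlying this approach models its output as locally linear in a guide signal, q = a*I + b, so edges in the guide steer the smoothing of the source. A minimal 1-D sketch follows; the paper's 2-D implementation and its exact detail-injection scheme are not reproduced, and the toy PAN/MS arrays are hypothetical.

```python
def box_mean(v, r):
    """Mean over a window of radius r with edge clamping."""
    n = len(v)
    return [sum(v[max(0, i - r):min(n, i + r + 1)]) /
            (min(n, i + r + 1) - max(0, i - r)) for i in range(n)]

def guided_filter(guide, src, r=2, eps=1e-2):
    """1-D guided image filter: per-window least squares of src on guide,
    q = a*I + b, with eps regularizing flat regions."""
    mI, mp = box_mean(guide, r), box_mean(src, r)
    corr = box_mean([i * p for i, p in zip(guide, src)], r)
    var = [c - m * m for c, m in zip(box_mean([i * i for i in guide], r), mI)]
    a = [(cv - mi * mp_) / (v + eps)
         for cv, mi, mp_, v in zip(corr, mI, mp, var)]
    b = [mp_ - ai * mi for mp_, ai, mi in zip(mp, a, mI)]
    return [abar * i + bbar for abar, bbar, i in
            zip(box_mean(a, r), box_mean(b, r), guide)]

pan = [0, 0, 0, 10, 10, 10]   # hypothetical sharp panchromatic profile
ms  = [2, 2, 2, 4, 4, 4]      # hypothetical coarse multispectral band
out = guided_filter(pan, ms)
```

Because the PAN guide has a crisp edge at the same location as the MS transition, the filtered band keeps that edge instead of blurring it, which is why guided filtering preserves spectral levels while transferring spatial detail.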
Future Directions for Fusion Propulsion Research at NASA
NASA Technical Reports Server (NTRS)
Adams, Robert B.; Cassibry, Jason T.
2005-01-01
Fusion propulsion is inevitable if the human race remains dedicated to exploration of the solar system. There are fundamental reasons why fusion surpasses more traditional approaches for routine crewed missions to Mars, crewed missions to the outer planets, and deep space high-speed robotic missions, assuming that reduced trip times, increased payloads, and higher available power are desired. A recent series of informal discussions was held among members from government, academia, and industry concerning fusion propulsion, and we compiled a sufficient set of arguments for utilizing fusion in space. If the U.S. is to lead the effort and produce a working system in a reasonable amount of time, NASA must take the initiative, relying on, but not waiting for, DOE guidance. Arguments for fusion propulsion are presented, along with fusion-enabled mission examples, the fusion technology trade space, and a proposed outline for future efforts.
Effect of projectile on incomplete fusion reactions at low energies
NASA Astrophysics Data System (ADS)
Sharma, Vijay R.; Shuaib, Mohd.; Yadav, Abhishek; Singh, Pushpendra P.; Sharma, Manoj K.; Kumar, R.; Singh, Devendra P.; Singh, B. P.; Muralithar, S.; Singh, R. P.; Bhowmik, R. K.; Prasad, R.
2017-11-01
The present work deals with experimental studies of incomplete fusion (ICF) reaction dynamics at energies as low as ≈ 4-7 MeV/A. Excitation functions populated via complete fusion and/or incomplete fusion processes in the 12C+175Lu and 13C+169Tm systems have been measured and analyzed within the framework of the PACE4 code. Comparison of the excitation function measurements for different projectile-target combinations suggests the existence of ICF even at energies slightly above the barrier, where complete fusion (CF) is supposed to be the sole contributor, and further demonstrates a strong dependence of ICF on projectile structure. The incomplete fusion strength functions for the 12C+175Lu and 13C+169Tm systems are analyzed as a function of various physical parameters at a constant vrel ≈ 0.053c. It has been found that the one-neutron (1n) excess projectile 13C (as compared to 12C) results in a smaller incomplete fusion contribution due to its relatively large negative α-Q-value; hence, the α-Q-value seems to be a reliable parameter for understanding ICF dynamics at low energies. In order to explore the reaction modes on the basis of their entry-state spin population, the spin distribution of residues populated via CF and/or ICF in the 16O+159Tb system has been measured using the particle-γ coincidence technique. CF-α and ICF-α channels have been identified from backward (B) and forward (F) α-gated γ spectra, respectively. Reaction-dependent decay patterns have been observed in different α-emitting channels. The CF channels are found to be fed over a broad spin range, whereas ICF-α channels were observed only for high-spin states. Further, the existence of incomplete fusion at low bombarding energies indicates the possibility of populating high-spin states.
NASA Astrophysics Data System (ADS)
So, W. Y.; Hong, S. W.; Kim, B. T.; Udagawa, T.
2004-06-01
Within the framework of an extended optical model, simultaneous χ2 analyses are performed on elastic scattering and fusion cross-section data for the 9Be + 209Bi and 6Li + 208Pb systems, both involving loosely bound projectiles, at near-Coulomb-barrier energies to determine the polarization potential as decomposed into direct reaction (DR) and fusion parts. We show that the DR and fusion potentials extracted from the χ2 analyses separately satisfy the dispersion relation, and that the expected threshold anomaly appears in the fusion part. The DR potential turns out to be a rather smooth function of the incident energy and has a magnitude at the strong absorption radius much larger than the fusion potential, explaining why a threshold anomaly has not been seen in optical potentials deduced from fits to the elastic-scattering data without such a decomposition. Using the extracted DR potential, we examine the effects of projectile breakup on fusion cross sections σF. The observed suppression of σF in the above-barrier region can be explained in terms of the flux loss due to breakup. However, the observed enhancement of σF in the subbarrier region cannot be understood in terms of the breakup effect. Rather, the enhancement can be related to the Q value of the neutron transfer within the systems, supporting the ideas of
Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis
Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan
2016-01-01
Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
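One common way to make a discrete Z-number Z = (A, B) usable in D-S fusion, and an assumption on our part rather than necessarily the paper's exact procedure, is to discount the fuzzy component A by the reliability B: each focal mass is scaled by B and the remainder is assigned to total ignorance, after which Dempster's rule combines the sensors. The fault hypotheses and numbers below are hypothetical.

```python
from itertools import product

def discount(mass, reliability):
    """Fold the Z-number's reliability component into its mass function:
    scale each focal mass by the reliability, move the rest to ignorance."""
    theta = frozenset().union(*mass)           # frame of discernment
    out = {s: m * reliability for s, m in mass.items()}
    out[theta] = out.get(theta, 0.0) + (1.0 - reliability)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with renormalization of conflict."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in out.items()}

# Hypothetical fault hypotheses F1, F2; each sensor reports (A, B):
# a mass function A plus a reliability B, i.e. a discrete Z-number.
F1, F2 = frozenset(['F1']), frozenset(['F2'])
s1 = discount({F1: 0.8, F2: 0.2}, reliability=0.9)   # trusted sensor
s2 = discount({F1: 0.7, F2: 0.3}, reliability=0.5)   # unreliable sensor
fused = dempster(s1, s2)
```

The discounting step is what makes the fusion robust to unreliable sensors: the low-reliability sensor's opinion is largely converted to ignorance, so it cannot dominate the fused diagnosis.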
Qi, Shile; Calhoun, Vince D.; van Erp, Theo G. M.; Bustillo, Juan; Damaraju, Eswar; Turner, Jessica A.; Du, Yuhui; Chen, Jiayu; Yu, Qingbao; Mathalon, Daniel H.; Ford, Judith M.; Voyvodic, James; Mueller, Bryon A.; Belger, Aysenil; McEwen, Sarah; Potkin, Steven G.; Preda, Adrian; Jiang, Tianzi
2017-01-01
Multimodal fusion is an effective approach to take advantage of cross-information among multiple imaging data to better understand brain diseases. However, most current fusion approaches are blind, without adopting any prior information. To date, there is increasing interest to uncover the neurocognitive mapping of specific behavioral measurement on enriched brain imaging data; hence, a supervised, goal-directed model that enables a priori information as a reference to guide multimodal data fusion is in need and a natural option. Here we proposed a fusion with reference model, called “multi-site canonical correlation analysis with reference plus joint independent component analysis” (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to reference information, such as cognitive scores. In a 3-way fusion simulation, the proposed method was compared with its alternatives on estimation accuracy of both target component decomposition and modality linkage detection. MCCAR+jICA outperforms others with higher precision. In human imaging data, working memory performance was utilized as a reference to investigate the covarying functional and structural brain patterns among 3 modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects respectively) were used. Interestingly, similar brain maps were identified between the two cohorts, with substantial overlap in the executive control networks in fMRI, salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked with working memory deficits in schizophrenia in multiple reports, while MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the potential of such results to identify potential neuromarkers for mental disorders. PMID:28708547
Gluon-fusion Higgs production in the Standard Model Effective Field Theory
NASA Astrophysics Data System (ADS)
Deutschmann, Nicolas; Duhr, Claude; Maltoni, Fabio; Vryonidou, Eleni
2017-12-01
We provide the complete set of predictions needed to achieve NLO accuracy in the Standard Model Effective Field Theory at dimension six for Higgs production in gluon fusion. In particular, we compute for the first time the contribution of the chromomagnetic operator Q̄_LΦσq_RG at NLO in QCD, which entails two-loop virtual and one-loop real contributions, as well as renormalisation and mixing with the Yukawa operator Φ†ΦQ̄_LΦq_R and the gluon-fusion operator Φ†ΦGG. Focusing on the top-quark-Higgs couplings, we consider the phenomenological impact of the NLO corrections in constraining the three relevant operators by implementing the results into the MadGraph5_aMC@NLO framework. This allows us to compute total cross sections as well as to perform event generation at NLO that can be directly employed in experimental analyses.
NASA Astrophysics Data System (ADS)
Boroun, G. R.; Khanehzar, A.; Boustanchi Kashan, M.
2017-11-01
In this paper, we study the top content of the nucleon by analyzing azimuthal asymmetries in lepton-nucleon deep inelastic scattering (DIS), and we search for the Higgs boson associated production channel, t\bar{t}H, at the Large Hadron-electron Collider (LHeC) arising from the boson-gluon fusion (BGF) contribution. We use azimuthal asymmetries in γ*Q cross sections, expressed in terms of helicity contributions to semi-inclusive deep inelastic scattering, to investigate numerical properties of the cos 2φ distribution. We conclude that measuring azimuthal distributions caused by intrinsic heavy quark production can directly probe heavy quarks inside the nucleon. Moreover, to estimate the probability of producing the Higgs boson, we suggest another approach within the framework of calculating the t\bar{t} cross section in the boson-gluon fusion mechanism. Finally, we confirm that the observed massive particle can be attributed to a Higgs boson produced via a fermion loop.
The Mercury Project: A High Average Power, Gas-Cooled Laser For Inertial Fusion Energy Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayramian, A; Armstrong, P; Ault, E
Hundred-joule, kilowatt-class lasers based on diode-pumped solid-state technologies are being developed worldwide for laser-plasma interactions and as prototypes for fusion energy drivers. The goal of the Mercury Laser Project is to develop key technologies within an architectural framework that demonstrates basic building blocks for scaling to larger multi-kilojoule systems for inertial fusion energy (IFE) applications. Mercury's requirements include scalability to IFE beamlines, a 10 Hz repetition rate, high efficiency, and 10⁹-shot reliability. The Mercury laser has operated continuously for several hours at 55 J and 10 Hz with fourteen 4 x 6 cm² ytterbium-doped strontium fluoroapatite (Yb:S-FAP) amplifier slabs pumped by eight 100 kW diode arrays. The 1047 nm fundamental wavelength was converted to 523 nm at 160 W average power with 73% conversion efficiency using yttrium calcium oxy-borate (YCOB).
CONFERENCE REPORT: Summary of the 8th IAEA Technical Meeting on Fusion Power Plant Safety
NASA Astrophysics Data System (ADS)
Girard, J. Ph.; Gulden, W.; Kolbasov, B.; Louzeiro-Malaquias, A.-J.; Petti, D.; Rodriguez-Rodrigo, L.
2008-01-01
Reports were presented covering a selection of topics on the safety of fusion power plants. These included a review of licensing studies developed for ITER site preparation, surveying common and non-common (i.e. site-dependent) issues as lessons for a broader approach to fusion power plant safety. Several fusion power plant models, spanning from accessible technology to more advanced materials-based concepts, were discussed. On topics related to fusion-specific technology, safety studies were reported on different concepts of breeding blanket modules, tritium handling, and auxiliary systems under normal and accident scenarios. The testing of power-plant-relevant technology in ITER was also assessed in terms of normal operation and accident scenarios, and occupational doses and radioactive releases for these tests were determined. Other safety issues specific to fusion were also discussed, such as the availability and reliability of fusion power plants, dust and tritium inventories, and component failure databases. This study reveals that the environmental impact of fusion power plants can be minimized through a proper selection of low-activation materials and by using recycling technology, helping to reduce waste volume and potentially opening the route for its reutilization in the nuclear sector or even its clearance into the commercial circuit. Computational codes for fusion safety were presented in support of the many studies reported. The ongoing work on establishing validation approaches aimed at improving the prediction capability of fusion codes has been supported by experimental results, and new directions for development have been identified. Fusion standards are not available, and fission experience is mostly used as the framework basis for licensing and as the target design basis for safe operation and for occupational and environmental constraints.
It has been argued that fusion can benefit from a fusion-specific regulatory approach, in particular for materials selection, which will have a large impact on waste disposal and recycling, and for radiation release limits, if these are indexed to the actual impact on individuals and the environment, given the differences between the types of radiation emitted by tritium and by fission products. Round-table sessions produced some common recommendations. The discussions also raised awareness of the need for larger IAEA involvement in supporting the development of fusion safety standards.
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
NASA Astrophysics Data System (ADS)
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1998-03-01
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that a potential target may take, and we use fuzzy logic strategies that fuse subjective attribute information from the sensors and the PDB to derive target identity more quickly, more precisely, and with statistically quantifiable measures of confidence. The conventional Dempster-Shafer method lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad-hoc decision rule based on the expected utility interval for pruning the `unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
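The expected utility interval that such a decision rule operates on is bounded by belief and plausibility. A minimal sketch of computing the [Bel, Pl] interval from a mass function, with illustrative identity hypotheses and masses that are not taken from the paper:

```python
def belief(mass, prop):
    # Bel(A): total mass committed to subsets of A (guaranteed support)
    return sum(m for s, m in mass.items() if s <= prop)

def plausibility(mass, prop):
    # Pl(A): total mass of focal elements intersecting A (possible support)
    return sum(m for s, m in mass.items() if s & prop)

# illustrative mass function over the frame {friend, hostile}
mass = {
    frozenset({"friend"}): 0.5,
    frozenset({"hostile"}): 0.2,
    frozenset({"friend", "hostile"}): 0.3,  # ambiguous, uncommitted mass
}
A = frozenset({"friend"})
interval = (belief(mass, A), plausibility(mass, A))  # (0.5, 0.8)
```

A proposition whose plausibility falls below the belief of a competing proposition can then be pruned as `unessential'; the paper's exact pruning threshold is not reproduced here.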
Feature level fusion of hand and face biometrics
NASA Astrophysics Data System (ADS)
Ross, Arun A.; Govindarajan, Rohin
2005-03-01
Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level, and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper we discuss fusion at the feature level in 3 different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
NASA Astrophysics Data System (ADS)
Budzan, Sebastian; Kasprzyk, Jerzy
2016-02-01
The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.
NASA Technical Reports Server (NTRS)
Haldemann, Albert F. C.; Johnson, Jerome B.; Elphic, Richard C.; Boynton, William V.; Wetzel, John
2006-01-01
CRUX is a modular suite of geophysical and borehole instruments combined with display and decision support system (MapperDSS) tools to characterize regolith resources, surface conditions, and geotechnical properties. CRUX is a NASA-funded Technology Maturation Program effort to provide enabling technology for Lunar and Planetary Surface Operations (LPSO). The MapperDSS uses data fusion methods with CRUX instruments, and other available data and models, to provide regolith properties information needed for LPSO that cannot be determined otherwise. We demonstrate the data fusion method by showing how it might be applied to characterize the distribution and form of hydrogen using a selection of CRUX instruments: Borehole Neutron Probe and Thermal Evolved Gas Analyzer data as a function of depth help interpret Surface Neutron Probe data to generate 3D information. Secondary information from other instruments along with physical models improves the hydrogen distribution characterization, enabling information products for operational decision-making.
Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment
2012-09-01
considerable variation in how fusion centers plan for, gather requirements, select and acquire federated search tools to bridge disparate databases...centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection and...acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes
BIM and IoT: A Synopsis from GIS Perspective
NASA Astrophysics Data System (ADS)
Isikdag, U.
2015-10-01
The Internet-of-Things (IoT) focuses on enabling communication between all devices and things, whether existent in real life or virtual. Building Information Models (BIMs) and Building Information Modelling have been buzzwords of the construction industry for the last 15 years. BIMs emerged from a push by software companies to tackle the problem of inefficient information exchange between different software packages and to enable true interoperability. In the BIM approach, the most up-to-date and accurate models of a building are stored in shared central databases during the design and construction of a project and at post-construction stages. GIS-based city monitoring and city management applications require the fusion of information acquired from multiple resources: BIMs, city models, and sensors. This paper focuses on providing a method for facilitating the GIS-based fusion of information residing in digital building "Models" and information acquired from city objects, i.e. "Things". Once this information fusion is accomplished, many fields ranging from emergency response, urban surveillance, and urban monitoring to smart buildings will have potential benefits.
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy, and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed from the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, a self-adaptive weighted-coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
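The paper's adaptive dictionary construction is not reproduced here, but the patch-wise sparse-coding fusion pipeline it builds on can be sketched as follows, with a random unit-norm dictionary, a greedy matching-pursuit coder, and a max-magnitude coefficient rule standing in for the adaptive steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(patch, D, n_nonzero=3):
    # greedy matching pursuit: repeatedly pick the atom most
    # correlated with the residual and subtract its contribution
    x = np.zeros(D.shape[1])
    r = patch.astype(float).copy()
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ r)))
        c = D[:, k] @ r
        x[k] += c
        r -= c * D[:, k]
    return x

# toy data: 5 corresponding 4x4 patches (flattened) from each source image
patches_a = rng.normal(size=(16, 5))
patches_b = rng.normal(size=(16, 5))
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms

fused_patches = []
for pa, pb in zip(patches_a.T, patches_b.T):
    xa, xb = sparse_code(pa, D), sparse_code(pb, D)
    # per atom, keep the coefficient with the larger magnitude (activity)
    x = np.where(np.abs(xa) >= np.abs(xb), xa, xb)
    fused_patches.append(D @ x)
fused = np.stack(fused_patches, axis=1)  # same shape as the inputs
```

In the actual method the dictionary is learned from the training blocks and the coefficient rule is energy-weighted; the sketch only shows the sparse-code-then-fuse structure.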
Application of data fusion technology based on D-S evidence theory in fire detection
NASA Astrophysics Data System (ADS)
Cai, Zhishan; Chen, Musheng
2015-12-01
Judgment and identification based on a single fire characteristic parameter in fire detection is subject to environmental disturbances, and its detection performance is accordingly limited by increased false positive and false negative rates. A compound fire detector employs information fusion technology to judge and identify multiple fire characteristic parameters in order to improve the reliability and accuracy of fire detection. The D-S evidence theory is applied to the multi-sensor data fusion: first, the data from all sensors are normalized to obtain the basic probability assignments for fire occurrence; then fusion is conducted using the D-S evidence theory; finally, the judgment results are given. The results show that the method achieves accurate fire signal identification and increases the accuracy of fire alarms, and is therefore simple and effective.
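The combination step in this pipeline is Dempster's rule. A minimal sketch with two sensors over the frame {fire, no_fire}; the basic probability assignments below are illustrative assumptions, not the paper's data:

```python
def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses of intersecting focal elements
    # and renormalize by the non-conflicting mass
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

FIRE, SAFE = frozenset({"fire"}), frozenset({"no_fire"})
THETA = FIRE | SAFE  # full frame: the sensor cannot decide
smoke = {FIRE: 0.6, SAFE: 0.1, THETA: 0.3}  # smoke sensor BPA (assumed)
temp = {FIRE: 0.7, SAFE: 0.1, THETA: 0.2}   # temperature sensor BPA (assumed)
fused = dempster_combine(smoke, temp)       # fused[FIRE] ≈ 0.862
```

Fusing the two sensors raises the mass on "fire" above either individual reading, which is the mechanism the compound detector relies on to reduce false alarms.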
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on finding the best fusion method for satellite images from SPOT, QuickBird, Landsat, and similar platforms, but only a few discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, the principal component analysis (PCA) transform, Brovey transform, hue-saturation-value (HSV) transform, and Gram-Schmidt transform, from the perspective of preserving the original spectral information. The experimental results showed that the images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also have abundant spectral information. Through comparison and evaluation, the Brovey transform integrates spatial detail well, but its color fidelity is not the best. The brightness and color distortion in the HSV-transformed image is the largest. The PCA transform performs well in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders vegetation edges most clearly, and yields fused images sharper than those of PCA; it is the most appropriate for GaoFen-1 imagery over vegetated and non-vegetated areas. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and image content.
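Of the four methods, the Brovey transform is the simplest to state: each multispectral band is rescaled by its share of the total intensity, multiplied by the panchromatic band. A minimal numpy sketch on toy data (not GaoFen-1 imagery):

```python
import numpy as np

def brovey(r, g, b, pan, eps=1e-12):
    # Brovey transform: band_i * PAN / (R + G + B)
    total = r + g + b + eps
    return r * pan / total, g * pan / total, b * pan / total

rng = np.random.default_rng(1)
r, g, b = (rng.uniform(0.1, 1.0, (4, 4)) for _ in range(3))
pan = rng.uniform(0.1, 1.0, (4, 4))
rf, gf, bf = brovey(r, g, b, pan)
# band ratios survive the sharpening (rf/gf == r/g),
# while the overall intensity is replaced by PAN
```

The preserved band ratios help class separation, while the wholesale intensity replacement is what limits the method's color fidelity relative to Gram-Schmidt.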
Advances in simulation of wave interactions with extended MHD phenomena
NASA Astrophysics Data System (ADS)
Batchelor, D.; Abla, G.; D'Azevedo, E.; Bateman, G.; Bernholdt, D. E.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Foley, S.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.
2009-07-01
The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized locations for RF power deposition.
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted from both the color and depth data captured by the Kinect sensor. The lines in the current data frame are then matched against the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point (ICP)-based algorithm. Comparative experiments with the KinectFusion algorithm and a state-of-the-art method were conducted in a corridor scene. The experimental results demonstrate that, after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open-access datasets further validated our improvements.
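The abstract does not give the exact form of the line-pair distance error; one plausible form, averaging endpoint-to-line distances so it can be summed into an ICP-style cost, can be sketched as:

```python
import numpy as np

def point_line_dist(p, a, u):
    # perpendicular distance from point p to the 3-D line through a along u
    u = u / np.linalg.norm(u)
    return float(np.linalg.norm(np.cross(p - a, u)))

def line_pair_error(segment, a, u):
    # mean distance of a measured segment's endpoints to a model line;
    # terms like this could be added to the ICP cost alongside point residuals
    return 0.5 * (point_line_dist(segment[0], a, u)
                  + point_line_dist(segment[1], a, u))

a = np.zeros(3)                # model line through the origin
u = np.array([1.0, 0.0, 0.0])  # along the x-axis
seg = [np.array([0.0, 2.0, 0.0]), np.array([5.0, -2.0, 0.0])]
err = line_pair_error(seg, a, u)  # 2.0
```

This is an assumed metric for illustration only; the paper's actual residual and weighting are not reproduced here.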
NASA Astrophysics Data System (ADS)
Blasch, Erik; Salerno, John; Kadar, Ivan; Yang, Shanchieh J.; Fenstermacher, Laurie; Endsley, Mica; Grewe, Lynne
2013-05-01
During the SPIE 2012 conference, panelists convened to discuss "Real world issues and challenges in Human Social/Cultural/Behavioral modeling with Applications to Information Fusion." Each panelist presented their current trends and issues. The panel had agreement on advanced situation modeling, working with users for situation awareness and sense-making, and HSCB context modeling in focusing research activities. Each panelist added different perspectives based on the domain of interest such as physical, cyber, and social attacks from which estimates and projections can be forecasted. Also, additional techniques were addressed such as interest graphs, network modeling, and variable length Markov Models. This paper summarizes the panelists discussions to highlight the common themes and the related contrasting approaches to the domains in which HSCB applies to information fusion applications.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; McNeese, Michael; Hall, David
2013-05-01
Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state of the art still falls short of creating computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention on relevant information elements, based both on a priori knowledge of the analyst's goals and on the processing/correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's Recognition-Primed Decision (RPD) model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling collaborative context-aware reasoning in both human teams and hybrid human/software-agent teams.
Semantic Context Detection Using Audio Event Fusion
NASA Astrophysics Data System (ADS)
Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling
2006-12-01
Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.
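At the audio event level, each HMM is scored on an observation sequence via the forward algorithm, and the event whose model yields the higher likelihood wins. A minimal scaled-forward sketch with assumed two-state models; the parameters below are illustrative, not trained on movie audio:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    # scaled forward algorithm: log P(obs | HMM), avoiding underflow
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return float(ll)

# illustrative two-state models for two audio events
pi = np.array([0.6, 0.4])
A_hmm = np.array([[0.7, 0.3], [0.2, 0.8]])
B_gun = np.array([[0.9, 0.1], [0.3, 0.7]])  # "gunshot" emissions
B_eng = np.array([[0.2, 0.8], [0.6, 0.4]])  # "engine" emissions
obs = [0, 0, 1, 0]  # quantized audio feature symbols
scores = {"gunshot": forward_loglik(pi, A_hmm, B_gun, obs),
          "engine": forward_loglik(pi, A_hmm, B_eng, obs)}
event = max(scores, key=scores.get)
```

The semantic-context layer then fuses sequences of such event decisions, e.g. with an ergodic HMM or an SVM, which this sketch does not cover.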
2014-07-01
technology work seeks to address gaps in the management, processing, and fusion of heterogeneous (i.e., soft and hard ) information to aid human decision...and bandwidth) to exploit the vast and growing amounts of data [16], [17]. There is also a broad research program on techniques for soft and hard ...Mott, G. de Mel, and T. Pham, “Integrating hard and soft information sources for D2D using controlled natural language,” in Proc. Information Fusion
2010-12-01
Naval Postgraduate School, Monterey, California. Master's Thesis, December 2010. Approved for public release; distribution is unlimited. Title: Social Media Integration into State-Operated Fusion...technologies, particularly social media, within fusion centers and local law enforcement entities could enable a more expedient exchange of information among
Feature-fused SSD: fast detection for small objects
NASA Astrophysics Data System (ADS)
Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian
2018-04-01
Small object detection is a challenging task in computer vision due to limited resolution and information. To solve this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed using the Single Shot Multibox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method that introduces contextual information into SSD in order to improve the accuracy for small objects. For the fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way they add contextual information. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, with 2-3 point improvements on some small object categories in particular. Their testing speeds are 43 and 40 FPS respectively, superior to the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
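The two fusion modules differ only in how the contextual (deeper) features join the shallow ones. A shape-level numpy sketch, with feature-map sizes chosen to resemble SSD's conv4_3 scale; the shapes and the weighting are assumptions for illustration:

```python
import numpy as np

def element_sum_fusion(shallow, deep_up, w=0.5):
    # element-sum module: weighted pixel-wise addition (channels must match)
    return w * shallow + (1 - w) * deep_up

def concat_fusion(shallow, deep_up):
    # concatenation module: stack along the channel axis (channels add up)
    return np.concatenate([shallow, deep_up], axis=0)

shallow = np.random.rand(256, 38, 38)  # shallow, high-resolution features
deep_up = np.random.rand(256, 38, 38)  # deeper features after upsampling
fused_sum = element_sum_fusion(shallow, deep_up)  # (256, 38, 38)
fused_cat = concat_fusion(shallow, deep_up)       # (512, 38, 38)
```

In the actual detector both variants are preceded and followed by learned convolutions; only the tensor bookkeeping is shown here.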
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
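The abstract does not state the exact fusion formula; one common form consistent with its description injects PAN high-frequency detail into the upsampled TIR band, with a scaling factor controlling the trade-off. A sketch under that assumption:

```python
import numpy as np

def box_blur(img, r=1):
    # mean filter over a (2r+1)x(2r+1) window with reflect padding
    p = np.pad(img, r, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + h, r + dx:r + dx + w]
    return out / (2 * r + 1) ** 2

def fuse_pan_tir(pan, tir_up, k=0.4):
    # k -> 0: pure thermal information; larger k: more spatial detail
    detail = pan - box_blur(pan)  # PAN high-pass component
    return tir_up + k * detail
```

With k = 0 the fused image reduces to the TIR band, so k plays the role of the paper's scaling factor between spatial detail and thermal fidelity; the paper's procedure for choosing the optimal k is not reproduced here.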
Cherenkov neutron detector for fusion reaction and runaway electron diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheon, MunSeong, E-mail: munseong@nfri.re.kr; Kim, Junghee
2015-08-15
A Cherenkov-type neutron detector was newly developed, and neutron measurement experiments were performed at the Korea Superconducting Tokamak Advanced Research (KSTAR) device. It was shown that the Cherenkov neutron detector can monitor the time-resolved neutron flux from deuterium-fueled fusion plasmas. Owing to the high temporal resolution of the detector, fast behaviors of runaway electrons, such as neutron spikes, could be observed clearly. It is expected that the Cherenkov neutron detector can provide useful information on runaway electrons as well as on the fusion reaction rate in fusion plasmas.
Lunar Helium-3 and Fusion Power
NASA Technical Reports Server (NTRS)
1988-01-01
The NASA Office of Exploration sponsored the NASA Lunar Helium-3 and Fusion Power Workshop. The meeting was held to understand the potential of using He-3 from the moon for terrestrial fusion power production. It provided an overview, two parallel working sessions, a review of sessions, and discussions. The lunar mining session concluded that mining, beneficiation, separation, and return of He-3 from the moon would be possible but that a large scale operation and improved technology is required. The fusion power session concluded that: (1) that He-3 offers significant, possibly compelling, advantages over fusion of tritium, principally increased reactor life, reduced radioactive wastes, and high efficiency conversion, (2) that detailed assessment of the potential of the D/He-3 fuel cycle requires more information, and (3) D/He-3 fusion may be best for commercial purposes, although D/T fusion is more near term.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection, and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
Apostolou, N; Papazoglou, Th; Koutsouris, D
2006-01-01
Image fusion is a process of combining information from multiple sensors. It is a useful tool implemented in the treatment planning programme of Gamma Knife radiosurgery. In this paper we evaluate advanced image fusion algorithms on the Matlab platform using head images. We develop nine grayscale image fusion methods in Matlab: average, principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian, filter-subtract-decimate (FSD), contrast, gradient, and morphological pyramids, and a shift-invariant discrete wavelet transform (SIDWT) method. We test these methods qualitatively and quantitatively. The quantitative criteria we use are the root mean square error (RMSE), mutual information (MI), standard deviation (STD), entropy (H), difference entropy (DH), and cross entropy (CEN). The qualitative criteria are: natural appearance, brilliance contrast, presence of complementary features, and enhancement of common features. Finally, we make clinically useful suggestions.
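Two of the quantitative criteria are straightforward to state. A small numpy sketch of RMSE and histogram entropy for 8-bit grayscale images; the remaining metrics follow the same pattern:

```python
import numpy as np

def rmse(a, b):
    # root mean square error between two images
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def entropy(img, levels=256):
    # Shannon entropy (bits) of the grayscale histogram
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 4, dtype=np.uint8)
err = rmse(a, b)  # 4.0
h = entropy(b)    # 0.0: a constant image carries no histogram information
```

Higher entropy in the fused image is read as more retained information, while lower RMSE against a reference indicates better fidelity, which is how such criteria rank the nine methods.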
Improvement of information fusion-based audio steganalysis
NASA Astrophysics Data System (ADS)
Kraetzer, Christian; Dittmann, Jana
2010-01-01
In this paper we extend an existing information-fusion-based audio steganalysis approach with three different kinds of evaluations. The first addresses the so far neglected evaluation of sensor-level fusion. Our results show that this fusion removes content dependency while achieving classification rates similar to single classifiers (especially for the considered global features) on the three exemplarily tested audio data hiding algorithms. The second evaluation extends the observations on fusion from segmental features alone to combinations of segmental and global features, reducing the computational complexity required for testing by about two orders of magnitude while maintaining the same degree of accuracy. The third evaluation tries to build a basis for estimating the plausibility of the introduced steganalysis approach by measuring the sensitivity of the models used in supervised classification of steganographic material to typical signal modification operations such as de-noising or 128 kbit/s MP3 encoding. Our results show that for some of the tested classifiers the probability of false alarms rises dramatically after such modifications.
A Markov game theoretic data fusion approach for cyber situational awareness
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik
2007-04-01
This paper proposes an innovative data-fusion/data-mining game-theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive predictions generated by adaptive feature/pattern recognition and to capture new, unknown features. A Markov (stochastic) game method is used to estimate the belief in each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies, and vice versa. Markov game theory also deals with the uncertainty and incompleteness of available information. A software tool is developed to demonstrate the performance of high-level information fusion for cyber network defense, and a simulation example shows the enhanced understanding of cyber-network defense.
An Overview of INEL Fusion Safety R&D Facilities
NASA Astrophysics Data System (ADS)
McCarthy, K. A.; Smolik, G. R.; Anderl, R. A.; Carmack, W. J.; Longhurst, G. R.
1997-06-01
The Fusion Safety Program at the Idaho National Engineering Laboratory has the lead for fusion safety work in the United States. Over the years, we have developed several experimental facilities to provide data for fusion reactor safety analyses. We now have four major experimental facilities that provide data for use in safety assessments. The Steam-Reactivity Measurement System measures hydrogen generation rates and tritium mobilization rates in high-temperature (up to 1200°C) fusion-relevant materials exposed to steam. The Volatilization of Activation Product Oxides Reactor (VAPOR) facility provides information on the mobilization, transport, and chemical reactivity of fusion-relevant materials at high temperature (up to 1200°C) in an oxidizing environment (air or steam). The Fusion Aerosol Source Test facility is a scaled-up version of VAPOR. The ion-implantation/thermal-desorption system is dedicated to research into processes and phenomena associated with the interaction of hydrogen isotopes with fusion materials. In this paper we describe the capabilities of these facilities.
Multimodal medical information retrieval with unsupervised rank fusion.
Mourão, André; Martins, Flávio; Magalhães, João
2015-01-01
Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.
A new evaluation method research for fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
To objectively evaluate the fusion of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments on infrared and visible image fusion results under different algorithms and environments are conducted on the basis of this index. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, demonstrating that the method is a practical and effective means of evaluating fused image quality.
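The abstract does not give the exact index definition. As an illustration only of the energy-weighted structural-similarity idea, a blockwise NumPy sketch in the style of weighted-SSIM fusion metrics (the function names, block size, and weighting rule are our own choices, not the authors') could be:

```python
import numpy as np

def ssim_block(x, y, c1=6.5025, c2=58.5225):
    """Standard SSIM on one block; c1, c2 are the usual 8-bit stabilizers."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def energy_weighted_ssim(a, b, f, block=8):
    """Energy-weighted structural similarity of fused image f w.r.t. sources a, b."""
    h, w = a.shape
    scores, weights = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            A = a[i:i + block, j:j + block].astype(float)
            B = b[i:i + block, j:j + block].astype(float)
            F = f[i:i + block, j:j + block].astype(float)
            ea, eb = (A ** 2).sum(), (B ** 2).sum()
            lam = ea / (ea + eb) if ea + eb > 0 else 0.5  # source mixing weight
            scores.append(lam * ssim_block(A, F) + (1 - lam) * ssim_block(B, F))
            weights.append(ea + eb)  # high-energy blocks count more
    weights = np.asarray(weights)
    return float(np.sum(np.asarray(scores) * weights) / weights.sum())
```

A perfect fused result (f identical to both sources) scores 1, and the score falls as structure is lost.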
NASA Astrophysics Data System (ADS)
McCullough, Claire L.; Novobilski, Andrew J.; Fesmire, Francis M.
2006-04-01
Faculty from the University of Tennessee at Chattanooga and the University of Tennessee College of Medicine, Chattanooga Unit, have used data mining techniques and neural networks to examine a set of fourteen features, data items, and HUMINT assessments for 2,148 emergency room patients with symptoms possibly indicative of Acute Coronary Syndrome. Specifically, the authors have generated Bayesian networks describing linkages and causality in the data, and have compared them with neural networks. The data includes objective information routinely collected during triage and the physician's initial case assessment, a HUMINT appraisal. Both the neural network and the Bayesian network were used to fuse the disparate types of information with the goal of forecasting thirty-day adverse patient outcome. This paper presents details of the methods of data fusion including both the data mining techniques and the neural network. Results are compared using Receiver Operating Characteristic curves describing the outcomes of both methods, both using only objective features and including the subjective physician's assessment. While preliminary, the results of this continuing study are significant both from the perspective of potential use of the intelligent fusion of biomedical informatics to aid the physician in prescribing treatment necessary to prevent serious adverse outcome from ACS and as a model of fusion of objective data with subjective HUMINT assessment. Possible future work includes extension of successfully demonstrated intelligent fusion methods to other medical applications, and use of decision level fusion to combine results from data mining and neural net approaches for even more accurate outcome prediction.
Electroweak Higgs boson plus three jet production at next-to-leading-order QCD.
Campanario, Francisco; Figy, Terrance M; Plätzer, Simon; Sjödahl, Malin
2013-11-22
We calculate next-to-leading order (NLO) QCD corrections to electroweak Higgs boson plus three jet production. Both vector boson fusion (VBF) and Higgs-strahlung type contributions are included along with all interferences. The calculation is implemented within the Matchbox NLO framework of the Herwig++ event generator.
Fusion Centers: Issues and Options for Congress
2008-01-18
... largely financed and staffed by the states, and there is no one “model” for how a center should be structured. State and local law enforcement and ... intelligence fusion centers, particularly when networked together nationally, represent a proactive tool to be used to fight a global jihadist adversary which ...
Cardoso, Danon Clemes; das Graças Pompolo, Silvia; Cristiano, Maykon Passos; Tavares, Mara Garcia
2014-01-01
Among insect taxa, ants exhibit one of the most variable chromosome numbers, ranging from n = 1 to n = 60. This high karyotype diversity is suggested to be correlated with ant diversification. The karyotype evolution of ants is usually understood in terms of Robertsonian rearrangements towards an increase in chromosome number. The ant genus Mycetophylax is a small genus of monogynous basal Attini (Formicidae: Myrmicinae), endemic to sand dunes along the Brazilian coastline. A recent taxonomic revision validated three species: Mycetophylax morschi, M. conformis and M. simplex. In this paper, we cytogenetically characterized all species belonging to the genus and analyzed the karyotypic evolution of Mycetophylax in the context of a molecular phylogeny and ancestral character state reconstruction. M. morschi showed a polymorphic number of chromosomes, with colonies showing 2n = 26 and 2n = 30 chromosomes. M. conformis presented a diploid number of 30 chromosomes, while M. simplex showed 36 chromosomes. The probabilistic models suggest that the ancestral haploid chromosome number of Mycetophylax was 17 (likelihood framework) or 18 (Bayesian framework). The analysis also suggested that fusions were responsible for the evolutionary reduction in chromosome number in the M. conformis and M. morschi karyotypes, whereas fission may determine the M. simplex karyotype. These results show the importance of fusions in chromosome changes towards chromosome number reduction in Formicidae and how a phylogenetic background can be used to reconstruct hypotheses about chromosome evolution.
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-01-01
In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose two sets of SPOT5-PAN images have been used, which are in turn used for Change Detection Indices (CDIs) calculation. For minimizing radiometric differences, a methodology based on zonal “invariant features” is suggested. The choice of one or another CDI for a change detection process is a subjective task, as each CDI is probably more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a “change map”, which can be accomplished by means of the CDIs' informational content. For this purpose, information metrics such as the Shannon entropy and “specific information” have been used to weight the change and no-change categories contained in a certain CDI and thus introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances. PMID:27706048
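The entropy-weighting idea can be sketched compactly: down-weight a CDI wherever its change posterior is uninformative (entropy near 1 bit). This is a simplified illustration of the weighting step only; the paper's "specific information" metric and pdf fitting are not reproduced, and the function names are our own.

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a Bernoulli probability map (change / no-change)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_weighted_fusion(posteriors):
    """Fuse per-CDI change posteriors, down-weighting high-entropy (uninformative) indices.

    posteriors: list of 2-D arrays, each holding P(change | CDI_k) in [0, 1].
    """
    stack = np.stack(posteriors)                               # (k, h, w)
    info = 1.0 - shannon_entropy(stack)                        # per-pixel informativeness
    weights = info / (info.sum(axis=0, keepdims=True) + 1e-12) # normalize across CDIs
    return (weights * stack).sum(axis=0)                       # fused change map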
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications place high requirements on the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to use radiometric calibration to remove the effect of the differing gains and errors of the satellites' sensors. After transformation from DN values to radiance, the multispectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. To evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing retention of the spectral information of the original multispectral images while maintaining abundant spatial information.
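The pan-simulation step described (fitting a synthetic panchromatic band to the multispectral radiance by linear regression) can be sketched as a generic least-squares fit; this is our own illustration, not the authors' implementation:

```python
import numpy as np

def simulate_pan(ms_radiance, pan_radiance):
    """Least-squares fit of a synthetic pan image from multispectral radiance bands.

    ms_radiance:  (bands, h, w) radiance images.
    pan_radiance: (h, w) observed panchromatic radiance.
    Returns the synthetic pan image maximally linearly correlated with the original.
    """
    k, h, w = ms_radiance.shape
    # Design matrix: one column per band plus an intercept column.
    X = np.column_stack([ms_radiance.reshape(k, -1).T, np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(X, pan_radiance.ravel(), rcond=None)
    return (X @ coef).reshape(h, w)
```

The fitted synthetic pan then replaces the raw pan in the subsequent ratio-based fusion step.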
Figueira, T. N.; Palermo, L. M.; Veiga, A. S.; Huey, D.; Alabi, C. A.; Santos, N. C.; Welsch, J. C.; Mathieu, C.; Niewiesk, S.; Moscona, A.
2016-01-01
Measles virus (MV) infection is undergoing resurgence and remains one of the leading causes of death among young children worldwide despite the availability of an effective measles vaccine. MV infects its target cells by coordinated action of the MV hemagglutinin (H) and fusion (F) envelope glycoproteins; upon receptor engagement by H, the prefusion F undergoes a structural transition, extending and inserting into the target cell membrane and then refolding into a postfusion structure that fuses the viral and cell membranes. By interfering with this structural transition of F, peptides derived from the heptad repeat (HR) regions of F can inhibit MV infection at the entry stage. In previous work, we have generated potent MV fusion inhibitors by dimerizing the F-derived peptides and conjugating them to cholesterol. We have shown that prophylactic intranasal administration of our lead fusion inhibitor efficiently protects from MV infection in vivo. We show here that peptides tagged with lipophilic moieties self-assemble into nanoparticles until they reach the target cells, where they are integrated into cell membranes. The self-assembly feature enhances biodistribution and the half-life of the peptides, while integration into the target cell membrane increases fusion inhibitor potency. These factors together modulate in vivo efficacy. The results suggest a new framework for developing effective fusion inhibitory peptides. IMPORTANCE: Measles virus (MV) infection causes an acute illness that may be associated with infection of the central nervous system (CNS) and severe neurological disease. No specific treatment is available. We have shown that fusion-inhibitory peptides delivered intranasally provide effective prophylaxis against MV infection. We show here that specific biophysical properties regulate the in vivo efficacy of MV F-derived peptides. PMID:27733647
Biomechanical study of tarsometatarsal joint fusion using finite element analysis.
Wang, Yan; Li, Zengyong; Zhang, Ming
2014-11-01
Complications of foot and ankle surgeries cause patients severe suffering. A sufficient understanding of internal biomechanical information such as stress distribution, contact pressure, and deformation is critical for estimating the effectiveness of surgical treatments and avoiding complications. The foot and ankle form an intricate and synergetic system, and localized intervention may alter the function of adjacent components. The aim of this study was to estimate the biomechanical effects of tarsometatarsal (TMT) joint fusion using comprehensive finite element (FE) analysis. The foot and ankle model consists of 28 bones, 72 ligaments, and the plantar fascia, with soft tissues embracing all the segments. Kinematic information and ground reaction forces during gait were obtained from motion analysis. Three gait instants, namely the first peak, second peak and mid-stance, were simulated in a normal foot and a foot with TMT joint fusion. Contact pressure on the plantar foot increased by 0.42%, 19% and 37%, respectively, after TMT fusion compared with normal foot walking. The naviculocuneiform and fifth metatarsocuboid joints sustained 27% and 40% increases in contact pressure at the second peak, implying a potential risk of joint problems such as arthritis. Von Mises stress in the second metatarsal bone increased by 22% at mid-stance, making it susceptible to stress fracture. This study provides biomechanical information for understanding the possible consequences of TMT joint fusion. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Information fusion for diabetic retinopathy CAD in digital color fundus photographs.
Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram
2009-05-01
The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on the multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete, comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best performing fusion method obtained an area under the receiver operating characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
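The paper evaluates several fusion rules without naming them in the abstract. Generic examples of collapsing per-image lesion probabilities into one exam-level abnormality score (illustrative rules only, not necessarily the authors' exact methods) might be:

```python
def fuse_exam_scores(image_scores, method="max"):
    """Fuse per-image CAD outputs into one exam-level abnormality score.

    image_scores: list (one entry per image) of lists of lesion probabilities.
    """
    # Summarize each image by its most suspicious lesion.
    per_image = [max(s) if s else 0.0 for s in image_scores]
    if method == "max":
        return max(per_image)                      # any one abnormal image flags the exam
    if method == "mean":
        return sum(per_image) / len(per_image)     # average suspicion over the exam
    if method == "noisy_or":
        p = 1.0
        for q in per_image:
            p *= (1.0 - q)                         # P(all images normal)
        return 1.0 - p                             # P(at least one abnormal)
    raise ValueError(f"unknown fusion method: {method}")
```

Sweeping a threshold on the fused score then yields the ROC curve used to compare fusion rules.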
Analytical method for thermal stress analysis of plasma facing materials
NASA Astrophysics Data System (ADS)
You, J. H.; Bolt, H.
2001-10-01
The thermo-mechanical response of plasma facing materials (PFMs) to heat loads from the fusion plasma is one of the crucial issues in fusion technology. In this work, a fully analytical description of the thermal stress distribution in armour tiles of plasma facing components is presented which is expected to occur under typical high heat flux (HHF) loads. The method of stress superposition is applied considering the temperature gradient and thermal expansion mismatch. Several combinations of PFMs and heat sink metals are analysed and compared. In the framework of the present theoretical model, plastic flow and the effect of residual stress can be quantitatively assessed. Possible failure features are discussed.
NASA Astrophysics Data System (ADS)
Emter, Thomas; Petereit, Janko
2014-05-01
An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments, based on extended Kalman filters (EKF), is presented. The sensors for localization include an inertial measurement unit, a GPS receiver, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while a localization within the 2D map established so far is concurrently estimated from the current LIDAR scan. Despite the longer run-time of the SLAM algorithm compared to the EKF update, a high update rate is still guaranteed by carefully joining and synchronizing two parallel localization estimators.
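The paper's EKF fuses IMU, GPS, gyroscope and odometry in full state space; the core predict/update idea can be illustrated with a deliberately minimal scalar Kalman step (a toy 1-D sketch with made-up noise values, not the authors' filter):

```python
def kf_step(x, P, u, z, Q=0.1, R=1.0):
    """One predict/update cycle of a 1-D Kalman filter.

    x, P : current position estimate and its variance
    u    : odometry displacement since the last step (prediction input)
    z    : absolute position measurement (e.g. from GPS)
    Q, R : process and measurement noise variances (illustrative values)
    """
    # Predict: dead-reckon with wheel odometry; uncertainty grows.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the absolute measurement; uncertainty shrinks.
    K = P_pred / (P_pred + R)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

The fused estimate always lies between the odometry prediction and the measurement, weighted by their relative uncertainties; the real framework does the same with vector states and Jacobians.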
Dysfunctional Metacognitive Beliefs in Body Dysmorphic Disorder
Zeinodini, Zahra; Sedighi, Sahar; Rahimi, Mandana Baghertork; Noorbakhsh, Simasadat; Esfahani, Sepideh Rajezi
2016-01-01
The present study examines the correlation of body dysmorphic disorder (BDD) with metacognitive subscales, metaworry and thought-fusion. The study was conducted in a correlational framework. The sample included 155 high school students in Isfahan, Iran in 2013-2014, gathered through convenience sampling. To gather data on BDD, the Yale-Brown Obsessive Compulsive Scale Modified for BDD (YBOCS-BDD) was applied. The Metacognitive Questionnaire (MCQ), Metaworry Questionnaire (MWQ), and Thought-Fusion Inventory (TFI) were then used to assess metacognitive subscales, metaworry and thought-fusion. Data were analyzed using Pearson correlation and multiple regression in SPSS 18. Results indicated that YBOCS-BDD scores had a significant correlation with scores on the MCQ (P<0.05), MWQ (P<0.05), and TFI (P<0.05). Multiple regressions were also run to predict YBOCS-BDD scores from the TFI, MWQ, and MCQ-30; these variables significantly predicted YBOCS-BDD [F(3,151) = 32.393, R² = 0.57]. The findings indicate that body dysmorphic disorder was significantly related to metacognitive subscales, metaworry, and thought-fusion in high school students in Isfahan, in line with previous studies. A deeper understanding of these processes can broaden the theory and treatment of BDD, thereby improving the lives of sufferers and potentially protecting others from developing this devastating disorder. PMID:26493420
Young, Patricia A; Morrison, Sherie L; Timmerman, John M
2014-10-01
The true potential of cytokine therapies in cancer treatment is limited by the inability to deliver optimal concentrations into tumor sites due to dose-limiting systemic toxicities. To maximize the efficacy of cytokine therapy, recombinant antibody-cytokine fusion proteins have been constructed by a number of groups to harness the tumor-targeting ability of monoclonal antibodies. The aim is to guide cytokines specifically to tumor sites where they might stimulate more optimal anti-tumor immune responses while avoiding the systemic toxicities of free cytokine therapy. Antibody-cytokine fusion proteins containing interleukin (IL)-2, IL-12, IL-21, tumor necrosis factor (TNF)α, and interferons (IFNs) α, β, and γ have been constructed and have shown anti-tumor activity in preclinical and early-phase clinical studies. Future priorities for development of this technology include optimization of tumor targeting, bioactivity of the fused cytokine, and choice of appropriate agents for combination therapies. This review is intended to serve as a framework for engineering an ideal antibody-cytokine fusion protein, focusing on previously developed constructs and their clinical trial results. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
McCune, Matthew; Kosztin, Ioan
2013-03-01
Cellular Particle Dynamics (CPD) is a theoretical-computational-experimental framework for describing and predicting the time evolution of biomechanical relaxation processes of multi-cellular systems, such as fusion, sorting and compression. In CPD, cells are modeled as an ensemble of cellular particles (CPs) that interact via short range contact interactions, characterized by an attractive (adhesive interaction) and a repulsive (excluded volume interaction) component. The time evolution of the spatial conformation of the multicellular system is determined by following the trajectories of all CPs through numerical integration of their equations of motion. Here we present CPD simulation results for the fusion of both spherical and cylindrical multi-cellular aggregates. First, we calibrate the relevant CPD model parameters for a given cell type by comparing the CPD simulation results for the fusion of two spherical aggregates to the corresponding experimental results. Next, CPD simulations are used to predict the time evolution of the fusion of cylindrical aggregates. The latter is relevant for the formation of tubular multi-cellular structures (i.e., primitive blood vessels) created by the novel bioprinting technology. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.
PI(4,5)P2 regulates myoblast fusion through Arp2/3 regulator localization at the fusion site
Bothe, Ingo; Deng, Su; Baylies, Mary
2014-01-01
Cell-cell fusion is a regulated process that requires merging of the opposing membranes and underlying cytoskeletons. However, the integration between membrane and cytoskeleton signaling during fusion is not known. Using Drosophila, we demonstrate that the membrane phosphoinositide PI(4,5)P2 is a crucial regulator of F-actin dynamics during myoblast fusion. PI(4,5)P2 is locally enriched and colocalizes spatially and temporally with the F-actin focus that defines the fusion site. PI(4,5)P2 enrichment depends on receptor engagement but is upstream or parallel to actin remodeling. Regulators of actin branching via Arp2/3 colocalize with PI(4,5)P2 in vivo and bind PI(4,5)P2 in vitro. Manipulation of PI(4,5)P2 availability leads to impaired fusion, with a reduction in the F-actin focus size and altered focus morphology. Mechanistically, the changes in the actin focus are due to a failure in the enrichment of actin regulators at the fusion site. Moreover, improper localization of these regulators hinders expansion of the fusion interface. Thus, PI(4,5)P2 enrichment at the fusion site encodes spatial and temporal information that regulates fusion progression through the localization of activators of actin polymerization. PMID:24821989
Minimum energy information fusion in sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapline, G
1999-05-11
In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition we show that for networks consisting of a large number of identical sensors, Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimum description length explanations.
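The MDL selection principle can be sketched as a two-part code: the cost of an explanation is the bits needed to state the model plus the bits needed to encode the residual sensor readings under it. This is a generic textbook-style sketch (Gaussian residual code, our own function names); the paper's Kohonen self-organization construction is not shown.

```python
import math

def description_length(model_bits, residuals, sigma=1.0):
    """Two-part MDL cost: bits to state the model plus bits to encode the
    residuals under a Gaussian code of width sigma."""
    resid_bits = sum(0.5 * math.log2(2 * math.pi * sigma ** 2)
                     + (r * r) / (2 * sigma ** 2 * math.log(2))
                     for r in residuals)
    return model_bits + resid_bits

def best_explanation(readings, explanations):
    """Pick the explanation whose predictions minimize total description length.

    explanations: dict mapping name -> (model_bits, predicted_readings).
    """
    def cost(item):
        bits, preds = item[1]
        return description_length(bits, [z - p for z, p in zip(readings, preds)])
    return min(explanations.items(), key=cost)[0]
```

An explanation that fits the readings well is chosen only if its better fit pays for its extra model bits, which is what ties explanation quality to energy spent communicating it.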
NASA Astrophysics Data System (ADS)
Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.
2016-04-01
Magnetic confinement nuclear fusion experiments require various real-time control applications such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this purpose and provides hardware and software standards and guidelines for building one. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or the algorithms; timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software, and it can be configured and monitored using EPICS (Experimental Physics and Industrial Control System). Moreover, JRTF can be ported to different platforms and integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
Multisource information fusion applied to ship identification for the recognized maritime picture
NASA Astrophysics Data System (ADS)
Simard, Marc-Alain; Lefebvre, Eric; Helleur, Christopher
2000-04-01
The Recognized Maritime Picture (RMP) is defined as a composite picture of activity over a maritime area of interest. In simplistic terms, building an RMP comes down to finding whether an object of interest, in our case a ship, is there or not, determining what it is, determining what it is doing, and determining whether some type of follow-on action is required. The Canadian Department of National Defence currently has access to, or may in the near future have access to, a number of civilian, military and allied information or sensor systems for these purposes. These systems include automatic self-reporting positional systems, air patrol surveillance systems, high-frequency surface radars, electronic intelligence systems, radar space systems and high-frequency direction-finding sensors. The ability to make full use of these systems is limited by the existing capability to fuse data from all sources in a timely, accurate and complete manner. This paper presents an information fusion system under development that correlates and fuses these information and sensor data sources. This fusion system, named the Adaptive Fuzzy Logic Correlator, correlates the information in batch but fuses and constructs ship tracks sequentially, applying standard Kalman filter techniques and fuzzy logic correlation techniques. We propose a set of recommendations that should improve the ship identification process. In particular, we propose utilizing as many non-redundant sources of information as possible that address specific vessel attributes. Another important recommendation is that the information fusion and data association techniques should be capable of dealing with incomplete and imprecise information. Some fuzzy logic techniques capable of tolerating imprecise and dissimilar data are proposed.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. The user now has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolution and bands. Image fusion can be a machine- and time-consuming endeavour, and it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of the processing flows available to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow for quickly obtaining the best results. It will optimize data and image fusion techniques, and it will provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to apply advanced processing methods and maximize the benefit of multi-sensor image exploitation.
Warfighter decision making performance analysis as an investment priority driver
NASA Astrophysics Data System (ADS)
Thornley, David J.; Dean, David F.; Kirk, James C.
2010-04-01
Estimating the relative value of alternative tactics, techniques and procedures (TTPs) and information systems requires measures of the costs and benefits of each, and methods for combining and comparing those measures. The NATO Code of Best Practice for Command and Control Assessment explains that decision-making quality would ideally be assessed on outcomes. Lessons learned in practice can be assessed statistically to support this, but experimentation with alternative measures in live conflict is undesirable. The development of practical experimentation to parameterize effective constructive simulation and analytic modelling for system utility prediction is therefore desirable. The Land Battlespace Systems Department of the Defence Science and Technology Laboratory (Dstl) has modeled the human development of situational awareness to support constructive simulation by empirically discovering how evidence is weighed according to circumstance, personality, training and briefing. The human decision maker (DM) provides the backbone of the information-processing activity associated with military engagements because of the inherent uncertainty of combat operations. To represent this process, and thereby assess equipment and non-technological interventions such as training and TTPs, we are developing modularized, timed, analytic, stochastic model components and instruments as part of a framework supporting the quantitative assessment of intelligence production and consumption methods in a decision-maker-centric mission space. In this paper, we formulate an abstraction of the human intelligence fusion process from Dstl's INCIDER model to include in our framework, and synthesize relevant cost and benefit characteristics.
NASA Astrophysics Data System (ADS)
Zajicek, J.; Burian, M.; Soukup, P.; Novak, V.; Macko, M.; Jakubek, J.
2017-01-01
Multimodal medical imaging based on magnetic resonance is commonly combined with a scintigraphic method such as PET or SPECT. These methods provide functional information, whereas magnetic resonance imaging provides high-spatial-resolution anatomical information or complementary functional information. Fusion of imaging modalities allows researchers to obtain complementary information in a single measurement. The combination of MRI with SPECT is still relatively new and challenging in many ways. The main complication of using SPECT in MRI systems is the presence of a high magnetic field; therefore (ferro)magnetic materials have to be eliminated. Furthermore, the application of radiofrequency fields within the MR gantry does not allow for the use of conductive structures such as common heavy-metal collimators. This work presents the design and construction of an experimental MRI-SPECT insert system and its initial tests. This unique insert system consists of an MR-compatible SPECT setup with CdTe pixelated Timepix sensors, tungsten collimators and a radiofrequency coil. Measurements were performed on a gelatine and tissue phantom with an embedded radioisotopic source (57Co, 122 keV γ rays) inside the RF coil in the Bruker BioSpec 47/20 (4.7 T) MR animal scanner. The project was performed in the framework of the Medipix Collaboration.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform feature-level or decision-level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels is available. Conventional hyperspectral-image-based ATR techniques project the high-dimensional reflectance signature onto a lower-dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to address the curse of dimensionality, or small-sample-size problem, they are not necessarily optimal projections. In this paper, we present a divide-and-conquer approach to the small-sample-size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source, multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches, which use correlation between variables for band grouping, we study the efficacy of higher-order statistical information (using average mutual information) for bottom-up band grouping. We also propose a confidence-measure-based decision fusion technique, in which the weights associated with the various classifiers are based on their confidence in recognizing the training data. To this end, the training accuracies of all classifiers are used for weight assignment in the fusion of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
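The confidence-weighted decision fusion described above can be sketched as follows: each per-subspace classifier outputs class posteriors, and the posteriors are combined with weights proportional to each classifier's training accuracy. The classifiers, posteriors and accuracies below are placeholders, not the paper's actual experimental setup:

```python
import numpy as np

def confidence_weighted_fusion(class_posteriors, train_accuracies):
    """Fuse per-subspace classifier outputs for one test pixel, weighting
    each classifier by its confidence (here: its training accuracy).

    class_posteriors: (n_classifiers, n_classes) posteriors
    train_accuracies: (n_classifiers,) training accuracies
    Returns the index of the fused winning class.
    """
    w = np.asarray(train_accuracies, dtype=float)
    w = w / w.sum()                            # normalize the weights
    fused = w @ np.asarray(class_posteriors)   # weighted sum of posteriors
    return int(np.argmax(fused))

# Three subspace classifiers, two target classes; the weakest classifier
# disagrees with the two stronger ones and is outweighed.
posteriors = [[0.7, 0.3],
              [0.6, 0.4],
              [0.2, 0.8]]
accuracies = [0.9, 0.85, 0.55]
label = confidence_weighted_fusion(posteriors, accuracies)  # → 0
```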
Auditory-visual fusion in speech perception in children with cochlear implants
Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.
2005-01-01
Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
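A sketch of the regional-variance rule for high-frequency components: for each coefficient, the source whose local window has the larger variance is assumed to carry the more salient detail. The window size and the subband arrays are illustrative; the paper's lifting-wavelet decomposition and matrix-completion stage are not reproduced here:

```python
import numpy as np

def local_variance(coeffs, radius=1):
    """Regional variance of each coefficient over a (2r+1)x(2r+1) window."""
    padded = np.pad(coeffs, radius, mode="reflect")
    h, w = coeffs.shape
    var = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            var[i, j] = win.var()
    return var

def fuse_highfreq(a, b, radius=1):
    """Per coefficient, keep the source with the larger regional variance."""
    pick_a = local_variance(a, radius) >= local_variance(b, radius)
    return np.where(pick_a, a, b)

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 8))    # textured high-frequency subband from image A
b = np.zeros((8, 8))           # flat (defocused) subband from image B
fused = fuse_highfreq(a, b)    # keeps A's detail everywhere
```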
The roosting spatial network of a bird-predator bat.
Fortuna, Miguel A; Popa-Lisseanu, Ana G; Ibáñez, Carlos; Bascompte, Jordi
2009-04-01
The use of roosting sites by animal societies is important in conservation biology, animal behavior, and epidemiology. The giant noctule bat (Nyctalus lasiopterus) constitutes fission-fusion societies whose members spread every day in multiple trees for shelter. To assess how the pattern of roosting use determines the potential for information exchange or disease spreading, we applied the framework of complex networks. We found a social and spatial segregation of the population in well-defined modules or compartments, formed by groups of bats sharing the same trees. Inside each module, we revealed an asymmetric use of trees by bats representative of a nested pattern. By applying a simple epidemiological model, we show that there is a strong correlation between network structure and the rate and shape of infection dynamics. This modular structure slows down the spread of diseases and the exchange of information through the entire network. The implication for management is complex, affecting differently the cohesion inside and among colonies and the transmission of parasites and diseases. Network analysis can hence be applied to quantifying the conservation status of individual trees used by species depending on hollows for shelter.
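The qualitative claim that modular structure slows spreading can be illustrated with a simple discrete-time susceptible-infected process on a contact network. This toy network and transmission probability are assumptions for illustration, not the paper's fitted epidemiological model:

```python
import numpy as np

def si_spread(adj, seed, p=0.3, steps=20, rng=None):
    """Discrete-time susceptible-infected spread on a contact network.
    Returns the fraction of infected nodes after each step."""
    rng = rng or np.random.default_rng(0)
    n = adj.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[seed] = True
    history = [infected.mean()]
    for _ in range(steps):
        # Each infected neighbour transmits along its edge with probability p.
        exposure = adj @ infected                  # infected neighbours per node
        prob = 1.0 - (1.0 - p) ** exposure
        infected |= rng.random(n) < prob
        history.append(infected.mean())
    return np.array(history)

# Two modules of 5 nodes each: dense within modules, one bridging edge,
# mimicking groups of bats that share the same roosting trees.
block = np.ones((5, 5)) - np.eye(5)
adj = np.zeros((10, 10))
adj[:5, :5] = block
adj[5:, 5:] = block
adj[4, 5] = adj[5, 4] = 1.0                        # single inter-module link
curve = si_spread(adj, seed=0)
```

Seeding one module and varying the number of bridging edges shows how compartmentalization delays full-network infection, the effect reported above.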
Intelligent on-board system for driving assistance
NASA Astrophysics Data System (ADS)
Rombaut, Michele; Le Fort-Piat, N.
1995-09-01
In this paper we present an electronic copilot embedded in a real car. The objective of the system is to help the driver by issuing alarms or warnings in order to avoid dangerous situations. An onboard perception system based on CCD cameras and proprioceptive sensors is used to provide information concerning the environment and the internal state of the vehicle. From this set of information, the copilot is able to analyze the situation and to generate warnings appropriate to the circumstances. The definition and development of such a system involve multisensor data fusion and supervision strategies. The framework of this work was the European Prometheus Pro-Art program. The electronic copilot has been integrated in a prototype vehicle called Prolab2. This French demonstrator integrates the work of nine research laboratories and two car companies, PSA and RENAULT. After a brief presentation of the global demonstrator, we present the two principal parts developed in our laboratory, corresponding to the high-level modules of the system: the dynamic data manager and the situation supervision.
A fusion network for semantic segmentation using RGB-D data
NASA Astrophysics Data System (ADS)
Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu
2018-04-01
Semantic scene parsing is important in many intelligent fields, including perceptual robotics. In the past few years, pixel-wise prediction tasks such as semantic segmentation of RGB images have been studied extensively and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, additional depth information is expected to help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCNs) in semantic segmentation, we design a fully convolutional network consisting of two branches, which extract features from RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB and depth data more effectively within a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve results competitive with state-of-the-art methods.
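The two-branch fuse-as-you-go idea can be sketched numerically: each modality passes through its own feature extractor, and the feature maps are merged element-wise before the deeper shared layers. The toy 1x1 convolutions and the summation fusion below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv_relu(x, kernels):
    """Toy 1x1 convolution + ReLU standing in for one branch stage.
    x: (C_in, H, W); kernels: (C_out, C_in)."""
    out = np.einsum("chw,oc->ohw", x, kernels)
    return np.maximum(out, 0.0)

def two_branch_fuse(rgb, depth, k_rgb, k_depth):
    """Extract features from RGB and depth separately, then fuse by
    element-wise summation before the deeper shared layers."""
    f_rgb = conv_relu(rgb, k_rgb)
    f_depth = conv_relu(depth, k_depth)
    return f_rgb + f_depth          # fused feature map fed deeper

rng = np.random.default_rng(0)
rgb = rng.normal(size=(3, 8, 8))    # 3-channel RGB input patch
depth = rng.normal(size=(1, 8, 8))  # 1-channel depth input patch
k_rgb = rng.normal(size=(16, 3))    # 1x1 kernels: 3 -> 16 channels
k_depth = rng.normal(size=(16, 1))  # 1x1 kernels: 1 -> 16 channels
fused = two_branch_fuse(rgb, depth, k_rgb, k_depth)
```

The design point is that both branches keep modality-specific weights while producing feature maps of a common shape, so fusion is a cheap element-wise operation.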
Cognitive Load Measurement in a Virtual Reality-based Driving System for Autism Intervention
Zhang, Lian; Wade, Joshua; Bian, Dayi; Fan, Jing; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan
2016-01-01
Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental disorder with enormous individual and social cost. In this paper, a novel virtual reality (VR)-based driving system was introduced to teach driving skills to adolescents with ASD. This driving system is capable of gathering eye gaze, electroencephalography, and peripheral physiology data in addition to driving performance data. The objective of this paper is to fuse multimodal information to measure cognitive load during driving such that driving tasks can be individualized for optimal skill learning. Individualization of ASD intervention is an important criterion due to the spectrum nature of the disorder. Twenty adolescents with ASD participated in our study and the data collected were used for systematic feature extraction and classification of cognitive loads based on five well-known machine learning methods. Subsequently, three information fusion schemes—feature level fusion, decision level fusion and hybrid level fusion—were explored. Results indicate that multimodal information fusion can be used to measure cognitive load with high accuracy. Such a mechanism is essential since it will allow individualization of driving skill training based on cognitive load, which will facilitate acceptance of this driving system for clinical use and eventual commercialization. PMID:28966730
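Of the three schemes explored, decision-level fusion is the simplest to sketch: each modality's classifier emits a binary cognitive-load decision, and the decisions are combined by a weighted vote. The modality weights below are illustrative placeholders, not values from the study:

```python
import numpy as np

def decision_level_fusion(decisions, weights=None):
    """Fuse binary cognitive-load decisions (0 = low, 1 = high) from
    per-modality classifiers by a (weighted) majority vote."""
    d = np.asarray(decisions, dtype=float)
    w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
    score = np.dot(w, d) / w.sum()   # weighted fraction voting "high"
    return int(score >= 0.5)

# Eye-gaze and EEG classifiers report high load, peripheral physiology
# disagrees; the weighted vote reports high cognitive load.
decisions = [1, 1, 0]
weights = [0.4, 0.4, 0.2]            # assumed per-modality reliabilities
label = decision_level_fusion(decisions, weights)  # → 1
```

Feature-level fusion would instead concatenate the raw features before a single classifier, and hybrid fusion mixes the two.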
Quantitative multi-modal NDT data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heideklang, René; Shokouhi, Parisa
2014-02-18
A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
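One common high-level (decision-level) scheme is a vote over per-modality detection maps: a defect is flagged only where several modalities agree, which suppresses single-sensor false alarms while retaining jointly detected indications. The scan line and threshold below are toy assumptions, not the study's actual data or fusion rule:

```python
import numpy as np

def fuse_detections(maps, min_votes=2):
    """High-level fusion of per-modality binary detection maps:
    flag a defect wherever at least `min_votes` modalities agree."""
    stack = np.asarray(maps, dtype=int)
    return stack.sum(axis=0) >= min_votes

# Toy 1-D scan line: eddy current, GMR and thermography each produce a
# binary detection map; positions 1-2 hold a real groove, the isolated
# single-sensor hits are false alarms.
eddy   = np.array([0, 1, 1, 0, 1, 0])
gmr    = np.array([0, 1, 1, 0, 0, 1])
thermo = np.array([1, 1, 1, 0, 0, 0])
fused = fuse_detections([eddy, gmr, thermo])
```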
Assessing Online Patient Education Readability for Spine Surgery Procedures.
Long, William W; Modi, Krishna D; Haws, Brittany E; Khechen, Benjamin; Massel, Dustin H; Mayo, Benjamin C; Singh, Kern
2018-03-01
Increased patient reliance on Internet-based health information has amplified the need for comprehensible online patient education articles. As suggested by the American Medical Association and the National Institutes of Health, spine fusion articles should be written at a 4th-6th-grade reading level to increase patient comprehension, which may improve postoperative outcomes. The purpose of this study is to determine the readability of online health care education information relating to anterior cervical discectomy and fusion (ACDF) and lumbar fusion procedures. Design: qualitative analysis of online health-education resources. Three search engines were used to access patient education articles for common cervical and lumbar spine procedures. Relevant articles were analyzed for readability using Readability Studio Professional Edition software (Oleander Software Ltd). Articles were stratified by organization type as follows: General Medical Websites (GMW), Healthcare Network/Academic Institutions (HNAI), and Private Practices (PP). Thirteen common readability tests were performed, with the mean readability of each compared between subgroups using analysis of variance. ACDF and lumbar fusion articles were determined to have mean readabilities of 10.7±1.5 and 11.3±1.6 grade levels, respectively. The GMW, HNAI, and PP subgroups had mean readabilities of 10.9±2.9, 10.7±2.8, and 10.7±2.5 for ACDF articles and 10.9±3.0, 10.8±2.9, and 11.6±2.7 for lumbar fusion articles. Of 310 total articles, only 6 (3 ACDF and 3 lumbar fusion) were written for comprehension below a 7th-grade reading level. Current online literature from medical websites regarding ACDF and lumbar fusion procedures is written at a grade level higher than the suggested guidelines. Current patient education articles should therefore be revised to accommodate the average reading level in the United States, which may result in improved patient comprehension and postoperative outcomes.
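One of the standard readability formulas such studies rely on is the Flesch-Kincaid grade level, FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. A minimal sketch, with a crude vowel-group syllable counter standing in for the commercial software's dictionary-based one:

```python
import re

def count_syllables(word):
    """Crude vowel-group syllable estimate (sufficient for a sketch)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words)) - 15.59)

simple = "The spine has bones. Surgery joins two bones."
grade = fk_grade(simple)   # short words, short sentences: low grade level
```

Running this over an article's body text and comparing the result with 7.0 reproduces the kind of pass/fail screening reported above.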
Web Audio/Video Streaming Tool
NASA Technical Reports Server (NTRS)
Guruvadoo, Eranna K.
2003-01-01
In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of the audio/video assets, stored in a relational database, while the assets themselves reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.