2010-07-01
The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF), along with a team including the Pennsylvania State University (PSU), Iona College (Iona), and Tennessee State ... CMIF's current research on methods for Test and Evaluation ([7], [8]) involves, for example, large-factor-space experimental design techniques ([9] ...
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline
Zhang, Jie; Li, Qingyang; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin
2017-01-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying the correct biomarkers may help determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics research problems. It aims to improve generalization performance by exploiting features shared among different tasks. However, most existing algorithms are formulated as supervised learning schemes, whose drawback is either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms. PMID:28943731
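The two-stage structure described above (unsupervised sparse feature learning followed by per-target prediction) can be illustrated with a minimal sketch. This is not the authors' MMDL algorithm: it uses scikit-learn's generic dictionary learner and ridge regression, and all data and variable names (X_baseline, X_followup, y_targets) are synthetic placeholders.

```python
# Minimal two-stage sketch (not the MMDL algorithm itself): learn a shared
# sparse dictionary on unlabeled imaging features, then fit one predictor per
# target on the resulting sparse codes. All data below is synthetic.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_baseline = rng.normal(size=(300, 120))   # hypothetical source 1 (one time slot)
X_followup = rng.normal(size=(300, 120))   # hypothetical source 2 (another time slot)
y_targets = rng.normal(size=(300, 5))      # hypothetical labels for 5 targets

# Stage 1: unsupervised dictionary learning on the pooled (unlabeled) sources.
X_pool = np.vstack([X_baseline, X_followup])
dico = MiniBatchDictionaryLearning(n_components=40, alpha=1.0, random_state=0)
dico.fit(X_pool)
codes = dico.transform(X_baseline)         # sparse codes used as shared features

# Stage 2: one regularized predictor per target, trained on the sparse codes.
models = [Ridge(alpha=1.0).fit(codes, y_targets[:, t]) for t in range(y_targets.shape[1])]
preds = np.column_stack([m.predict(codes) for m in models])
print(preds.shape)  # (300, 5)
```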
Runtime Simulation for Post-Disaster Data Fusion Visualization
2006-10-01
Center for Multisource Information Fusion (CMIF), The State University of New York at Buffalo, Buffalo, NY 14260, USA (kesh@eng.buffalo.edu) ...
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background is achieved by a Gaussian Mixture Model; foreground segmentation, on the other hand, is achieved by the Connected Components Analysis (CCA) technique. The track of each individual target is estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated using a nearest-4-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
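The RGB-histogram association step can be sketched with OpenCV. This is only the association-matrix idea, not the authors' full tracker: the "detections" here are random synthetic patches standing in for segmented targets of interest, and a greedy or Hungarian assignment would normally follow.

```python
# Sketch of an RGB-histogram association matrix between detections in two
# frames (synthetic patches; not the paper's complete fusion-based tracker).
import numpy as np
import cv2

def rgb_hist(patch, bins=8):
    """Normalized 3-D RGB histogram of an image patch."""
    h = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def association_matrix(patches_prev, patches_curr):
    """Histogram-correlation score for every (previous, current) detection pair."""
    A = np.zeros((len(patches_prev), len(patches_curr)))
    for i, p in enumerate(patches_prev):
        hp = rgb_hist(p)
        for j, q in enumerate(patches_curr):
            A[i, j] = cv2.compareHist(hp, rgb_hist(q), cv2.HISTCMP_CORREL)
    return A

# Hypothetical detections: random colour patches stand in for segmented TOIs.
rng = np.random.default_rng(1)
prev = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(3)]
curr = [rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(4)]
print(association_matrix(prev, curr))  # an assignment step would follow
```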
Unified Research on Network-Based Hard/Soft Information Fusion
2016-02-02
... types). There are a number of search tree run parameters which must be set depending on the experimental setting. A pilot study was run to identify ... Final Report: Unified Research on Network-Based Hard/Soft Information Fusion. The University at Buffalo (UB) Center for Multisource ...
FuzzyFusion: an application architecture for multisource information fusion
NASA Astrophysics Data System (ADS)
Fox, Kevin L.; Henning, Ronda R.
2009-04-01
The correlation of information from disparate sources has long been an issue in data fusion research. Traditional data fusion addresses the correlation of information from sources as diverse as single-purpose sensors and all-source multi-media information. Information system vulnerability information is similar in its diversity of sources and content, and in the desire to draw a meaningful conclusion, namely, the security posture of the system under inspection. FuzzyFusion™, a data fusion model that is being applied to the computer network operations domain, is presented. This model has been successfully prototyped in an applied research environment and represents a next-generation assurance tool for system and network security.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. It then designs the objective function as a weighted sum of evaluation indices, and optimizes this objective function with GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows:
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
•This text proposes the model operator and the observed operator as the fusion scheme of RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
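The idea of optimizing a weighted sum of evaluation indices can be illustrated with a toy sketch. This is not the GSDA algorithm: it uses a simple weighted-average fusion with a single parameter, two generic indices (entropy and average gradient), and a bare-bones genetic-style search (truncation selection plus Gaussian mutation, no crossover); the images are synthetic.

```python
# Toy illustration of optimizing a weighted-sum objective over a fusion
# parameter (not the paper's GSDA or its wavelet/contrast-pyramid operators).
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((64, 64))          # hypothetical source image 1
B = rng.random((64, 64))          # hypothetical source image 2

def entropy(img, bins=32):
    p, _ = np.histogram(img, bins=bins, range=(0, 1), density=True)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def avg_gradient(img):
    gy, gx = np.gradient(img)
    return np.sqrt((gx ** 2 + gy ** 2) / 2).mean()

def objective(w):                  # weighted sum of two evaluation indices
    F = w * A + (1 - w) * B
    return 0.5 * entropy(F) + 0.5 * avg_gradient(F)

# Minimal genetic-style search over the fusion weight w in [0, 1].
pop = rng.random(20)
for _ in range(30):
    scores = np.array([objective(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                       # keep best half
    children = np.clip(parents + rng.normal(0, 0.05, 10), 0, 1)   # mutate
    pop = np.concatenate([parents, children])
best = pop[np.argmax([objective(w) for w in pop])]
print("best fusion weight:", round(float(best), 3))
```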
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing a selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
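The adaptive weighted fusion step can be sketched in a few lines. This is only the weighted combination of sub-model outputs, not the full MLSEN pipeline (no genetic algorithm, kernel PLS, or branch-and-bound here); the weighting rule (inverse validation error) and all data are illustrative assumptions.

```python
# Sketch of adaptive weighted fusion of ensemble sub-model outputs: weights are
# inversely proportional to each sub-model's validation error. Data is synthetic.
import numpy as np

rng = np.random.default_rng(3)
y_val = rng.normal(size=200)                       # hypothetical validation target
# Predictions of, say, four inside-layer sub-models with different noise levels.
sub_preds = y_val[None, :] + rng.normal(0, [[0.2], [0.4], [0.6], [0.8]], (4, 200))

mse = ((sub_preds - y_val) ** 2).mean(axis=1)      # per-sub-model error
weights = (1.0 / mse) / (1.0 / mse).sum()          # error-inverse (adaptive) weights

fused = weights @ sub_preds                        # outside-layer fused output
print("sub-model MSE:", np.round(mse, 3))
print("fused MSE:    ", round(float(((fused - y_val) ** 2).mean()), 3))
```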
Fan, Yuanjie; Yin, Yuehong
2013-12-01
Although exoskeletons have received enormous attention and have been widely used in gait training and walking assistance in recent years, few reports have addressed their application during early post-stroke rehabilitation. This paper presents a healthcare technology for active and progressive early rehabilitation using multisource information fusion from surface electromyography and force-position extended physiological proprioception. Active-compliance control based on the interaction force between patient and exoskeleton is applied to accelerate the recovery of neuromuscular function, whereby progressive treatment through timely evaluation contributes to an effective and appropriate physical rehabilitation. Moreover, a clinic-oriented rehabilitation system, wherein a lower-extremity exoskeleton with active compliance is mounted on a standing bed, is designed to ensure comfortable and secure rehabilitation according to the structure and control requirements. Preliminary experiments and a clinical trial provide valuable information on the feasibility, safety, and effectiveness of the progressive exoskeleton-assisted training.
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, a visible image collected by a single sensor can express the shape, color, and texture details of a target very well, but because of the haze, sharpness is low and parts of the target subject are lost. An infrared image collected by a single sensor, owing to its response to thermal radiation and strong penetration ability, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, the improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of visible targets and highlight occluded infrared targets for target recognition.
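The dehazing preprocessing step can be illustrated with the textbook Dark Channel Prior baseline. The paper uses an improved DCP variant not reproduced here; the patch size, omega, and t0 values below are the commonly used defaults, and the input image is synthetic.

```python
# Minimal single-image dehazing sketch based on the Dark Channel Prior
# (baseline only; the paper's improved DCP is not reproduced).
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img, patch=15, omega=0.95, t0=0.1):
    """img: HxWx3 float array in [0, 1]; returns a dehazed estimate."""
    # 1. Dark channel: per-pixel min over RGB, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # 2. Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # 3. Transmission estimate from the dark channel of the normalized image.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)
    # 4. Recover the scene radiance J = (I - A) / t + A.
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

# Hypothetical hazy input (a real image would be loaded with cv2 or imageio).
hazy = np.random.default_rng(4).random((120, 160, 3))
print(dehaze_dcp(hazy).shape)
```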
Distributed Fusion in Sensor Networks with Information Genealogy
2011-06-28
... image processing [2], acoustic and speech recognition [3], multitarget tracking [4], distributed fusion [5], and Bayesian inference [6-7]. ... "Adaptation for Distant-Talking Speech Recognition," in Proc. Acoustics, Speech, and Signal Processing, 2004; [4] Y. Bar-Shalom and T. E. Fortmann ... used in speech recognition and other classification applications [8]. But their use in underwater mine classification is limited. In this paper, we ...
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be acquired simultaneously by various sensors. However, there are many scenarios in which no single sensor can give the complete picture. Image fusion is an important approach to this problem and produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), mutual information, etc.
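The undecimated multiscale fusion pattern described above can be sketched generically. This stand-in uses plain Gaussian smoothing to form detail layers instead of the mapped LS-SVM support-value filters, which are not reproduced; the fusion rules (max-magnitude details, averaged coarse layer) and the input images are assumptions for illustration.

```python
# Generic undecimated multiscale fusion sketch (not the support value transform
# itself): detail layers fused by max magnitude, coarse layer averaged.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_fuse(a, b, levels=3):
    fused_details = []
    ca, cb = a.astype(float), b.astype(float)
    for lvl in range(levels):
        sa, sb = gaussian_filter(ca, 2 ** lvl), gaussian_filter(cb, 2 ** lvl)
        da, db = ca - sa, cb - sb                            # undecimated detail layers
        fused_details.append(np.where(np.abs(da) >= np.abs(db), da, db))
        ca, cb = sa, sb                                      # pass approximations down
    fused = 0.5 * (ca + cb)                                  # average the coarse layer
    for d in reversed(fused_details):
        fused = fused + d
    return fused

rng = np.random.default_rng(5)
src1, src2 = rng.random((128, 128)), rng.random((128, 128))  # hypothetical inputs
print(multiscale_fuse(src1, src2).shape)
```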
Multisource geological data mining and its utilization of uranium resources exploration
NASA Astrophysics Data System (ADS)
Zhang, Jie-lin
2009-10-01
Nuclear energy, as a clean energy source, plays an important role in China's economic development, and according to the national long-term development strategy, many more nuclear power plants will be built in the next few years, so uranium resources exploration faces a great challenge. Research and practice on mineral exploration demonstrate that utilizing modern Earth Observing System (EOS) technology and developing new multi-source geological data mining methods are effective approaches to uranium deposit prospecting. Based on data mining and knowledge discovery technology, this paper uses multi-source geological data to characterize the electromagnetic spectral, geophysical, and spatial information of uranium mineralization factors, and provides technical support for uranium prospecting integrated with field remote sensing geological surveys. The multi-source geological data used in this paper include satellite hyperspectral imagery (Hyperion), high-spatial-resolution remote sensing data, uranium geological information, airborne radiometric data, and aeromagnetic and gravity data, and related data mining methods have been developed, such as fusion of optical data and Radarsat imagery, information integration of remote sensing and geophysical data, and so on. Based on the above approaches, multi-geoscience information on uranium mineralization factors, including complex polystage rock masses, mineralization-controlling faults, and hydrothermal alterations, has been identified, the metallogenic potential of uranium has been evaluated, and some predicted prospecting areas have been located.
Generalized information fusion and visualization using spatial voting and data modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.
2013-05-01
We present a novel and innovative information fusion and visualization framework for multi-source intelligence (multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be converted into numerical form for further processing downstream, followed by a short description of how this information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber layers for the purpose of tracking cyber personas. Finally we describe a path ahead for creating interactive agile networks through defender customized Cyber-cubes for network configuration and attack visualization.
The Multi-energy High precision Data Processor Based on AD7606
NASA Astrophysics Data System (ADS)
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information from multi-energy systems, forming the information platform of the energy Internet at Laogang, which uses electricity as its major energy source. Combined with information fusion technologies, this paper analyzes the data to improve the overall energy system's scheduling capability and reliability.
Foundational Technologies for Activity-Based Intelligence - A Review of the Literature
2014-02-01
... academic community. The Center for Multisource Information Fusion (CMIF) at the University at Buffalo, Harvard University, and the University of ... depth of researchers conducting high-value Multi-INT research; these efforts are delivering high-value research outcomes, e.g., [46-47]. CMIF ...
Research on precise modeling of buildings based on multi-source data fusion of air to ground
NASA Astrophysics Data System (ADS)
Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong
2016-03-01
Aiming at the accuracy problem of precise building modeling, a test study was conducted based on multi-source data for buildings in the same test area, including roof data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of building clusters were accurately extracted, a series of qualitative and quantitative analyses was carried out on the 2D interval between the outlines. The research results provide reliable accuracy support for precise building modeling from air-ground multi-source data fusion; at the same time, solutions to key technical problems are discussed.
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities of hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. In addition, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in the remote sensing area are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
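The band-grouping-plus-decision-fusion idea can be sketched as follows. This is not the authors' method: the groups here are simple contiguous equal-width subspaces rather than mutual-information-driven partitions, the data is synthetic, and the only element retained from the description above is the confidence weighting by training accuracy.

```python
# Sketch of confidence-weighted decision fusion over contiguous band groups:
# one LDA classifier per subspace, fused with training-accuracy weights.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n, n_bands, n_classes, n_groups = 60, 100, 3, 5           # small-sample setting
y = rng.integers(0, n_classes, n)
X = rng.normal(size=(n, n_bands)) + y[:, None] * 0.5       # class-dependent shift

groups = np.array_split(np.arange(n_bands), n_groups)      # contiguous subspaces
weights, probs = [], []
for g in groups:
    clf = LinearDiscriminantAnalysis().fit(X[:, g], y)
    weights.append(clf.score(X[:, g], y))                   # training-accuracy confidence
    probs.append(clf.predict_proba(X[:, g]))

weights = np.array(weights) / np.sum(weights)
fused = sum(w * p for w, p in zip(weights, probs))          # weighted posterior fusion
print("fused accuracy:", (fused.argmax(axis=1) == y).mean())
```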
A research on the positioning technology of vehicle navigation system from single source to "ASPN"
NASA Astrophysics Data System (ADS)
Zhang, Jing; Li, Haizhou; Chen, Yu; Chen, Hongyue; Sun, Qian
2017-10-01
Due to the suddenness and complexity of modern warfare, land-based weapon systems need precision strike capability on roads and railways. The vehicle navigation system is one of the most important pieces of equipment for land-based weapon systems with precision strike capability. Single-source navigation systems have inherent shortcomings in providing continuous and stable navigation information. To overcome these shortcomings, multi-source positioning technology has been developed. The All Source Positioning and Navigation (ASPN) program was proposed in 2010; it seeks to enable low-cost, robust, and seamless navigation solutions for military use on any operational platform and in any environment, with or without GPS. The development trend of vehicle positioning technology is reviewed in this paper. The trend indicates that positioning technology has developed from single-source and multi-source approaches toward ASPN. The data fusion techniques based on multi-source navigation and ASPN are analyzed in detail.
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
Revisions to the JDL data fusion model
NASA Astrophysics Data System (ADS)
Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.
1999-03-01
The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration, and operation of multi-sensor/multi-source systems. Data fusion involves combining information, in the broadest sense, to estimate or predict the state of some aspect of the universe. These may be represented in terms of attributive and relational states. If the job is to estimate the state of people, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison, or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. This involves broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design, and development.
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection (ASTER) digital elevation model (DEM) data, showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
Detecting misinformation and knowledge conflicts in relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian
2014-06-01
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries of current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against a data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data) while producing 82% recall and 90% precision in a realistic noise situation (15% missing attributes).
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Land cover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database with the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and secure data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different change/no-change thresholding algorithms are applied to these indices in order to better estimate the statistical parameters of the two categories; finally, the indices are integrated into a multisource change detection fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proven efficient for identifying the change detection index with the higher contribution. PMID:22737023
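The index-threshold-fuse pipeline can be sketched with a toy example. The specific indices and thresholding algorithms used in the paper are not reproduced; here a simple difference index and a log-ratio index are thresholded with a hand-rolled Otsu rule and fused by agreement, all on synthetic images.

```python
# Toy change-detection sketch: two change indices, Otsu thresholding,
# and a simple consensus fusion (not the paper's exact indices or algorithms).
import numpy as np

def otsu_threshold(x, bins=256):
    hist, edges = np.histogram(x, bins=bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist); w1 = w0[-1] - w0
    m0 = np.cumsum(hist * mids) / np.maximum(w0, 1)
    m1 = (np.cumsum((hist * mids)[::-1])[::-1] - hist * mids) / np.maximum(w1, 1)
    between = w0[:-1] * w1[:-1] * (m0[:-1] - m1[:-1]) ** 2   # between-class variance
    return mids[np.argmax(between)]

rng = np.random.default_rng(7)
t1 = rng.random((100, 100))
t2 = t1.copy(); t2[40:60, 40:60] += 0.8                      # injected change patch

idx_diff = np.abs(t2 - t1)                                   # image-difference index
idx_ratio = np.abs(np.log((t2 + 1e-3) / (t1 + 1e-3)))        # log-ratio index
masks = [idx > otsu_threshold(idx) for idx in (idx_diff, idx_ratio)]
fused_cd = np.logical_and.reduce(masks)                      # consensus fusion
print("changed pixels:", int(fused_cd.sum()))
```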
Fusion or confusion: knowledge or nonsense?
NASA Astrophysics Data System (ADS)
Rothman, Peter L.; Denton, Richard V.
1991-08-01
The terms 'data fusion,' 'sensor fusion,' 'multi-sensor integration,' and 'multi-source integration' have been used widely in the technical literature to refer to a variety of techniques, technologies, systems, and applications which employ and/or combine data derived from multiple information sources. Applications of data fusion range from real-time fusion of sensor information for the navigation of mobile robots to the off-line fusion of both human and technical strategic intelligence data. The Department of Defense Critical Technologies Plan lists data fusion in the highest priority group of critical technologies, but just what is data fusion? The DoD Critical Technologies Plan states that data fusion involves 'the acquisition, integration, filtering, correlation, and synthesis of useful data from diverse sources for the purposes of situation/environment assessment, planning, detecting, verifying, diagnosing problems, aiding tactical and strategic decisions, and improving system performance and utility.' More simply stated, sensor fusion refers to the combination of data from multiple sources to provide enhanced information quality and availability over that which is available from any individual source alone. This paper presents a survey of the state-of-the-art in data fusion technologies, system components, and applications. A set of characteristics which can be utilized to classify data fusion systems is presented. Additionally, a unifying mathematical and conceptual framework within which to understand and organize fusion technologies is described. A discussion of often overlooked issues in the development of sensor fusion systems is also presented.
Multisource data fusion for documenting archaeological sites
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir; Chibunichev, Alexander; Zhuravlev, Denis
2017-10-01
The quality of archaeological site documentation is of great importance for preserving and investigating cultural heritage. Progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing, and management. First of all, it is necessary to gather information about findings that is as complete as possible, with no loss of information and no damage to the artifacts. Remote sensing technologies are the most adequate and powerful means of satisfying this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for archaeological data documentation, structuring, fusion, and analysis. The proposed approach is applied to documenting the Bosporus archaeological expedition of the Russian State Historical Museum.
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
NASA Astrophysics Data System (ADS)
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1998-03-01
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that a potential target may take, and we use fuzzy logic strategies which enable the fusion of subjective attribute information from the sensors and the PDB so that target identity can be derived more quickly, more precisely, and with statistically quantifiable measures of confidence. The conventional Dempster-Shafer approach lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad-hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
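The evidence-combination machinery underlying this kind of identity fusion can be sketched generically. This shows only standard Dempster-Shafer combination with belief/plausibility intervals, not the paper's modified method or its expected-utility decision rule; the identity frame and the two basic probability assignments are hypothetical.

```python
# Generic Dempster-Shafer sketch: combine two basic probability assignments
# over a tiny identity frame and report belief/plausibility intervals.
from itertools import combinations

def powerset(frame):
    return [frozenset(s) for r in range(1, len(frame) + 1)
            for s in combinations(frame, r)]

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}  # normalize

frame = {"friend", "hostile", "neutral"}
# Hypothetical BPAs from two sources (say, IFF-like and ESM-like evidence).
m_a = {frozenset({"friend"}): 0.6, frozenset(frame): 0.4}
m_b = {frozenset({"hostile", "neutral"}): 0.5, frozenset(frame): 0.5}

m = dempster_combine(m_a, m_b)
for prop in powerset(frame):
    bel = sum(v for s, v in m.items() if s <= prop)    # belief: mass of subsets
    pl = sum(v for s, v in m.items() if s & prop)      # plausibility: intersecting mass
    if bel or pl:
        print(sorted(prop), "belief=%.3f plausibility=%.3f" % (bel, pl))
```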
Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform
NASA Astrophysics Data System (ADS)
Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.
2018-04-01
To address the difficulty involved when using existing commercial Geographic Information System platforms to integrate and fuse multi-source image data, this research proposes loading multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two types of tile data loading schemes have been designed for the mashup of tiles: the single-data-source loading scheme and the multi-data-source loading scheme. The multi-source digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single-data-source loading scheme and the multi-data-source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data-source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small and medium-scale GIS programs and has practical application potential. The deformation problem arising during the transition between different spatial references is an important topic for further research.
Horizontal Estimation and Information Fusion in Multitarget and Multisensor Environments
1987-09-01
... provided needed inspirations. Special thanks are due to Distinguished Professor G. J. Thaler, Professor R. Panholzer, Professor N. F. Schneidewind, and ... Guidance, McGraw-Hill, pp. 338-340, 1964. [31] Battin, R. H., and Levine, G. M., "Application of Kalman Filtering Techniques in the Apollo Program," in Theory ... FL, pp. 171-175, Dec. 1971. [43] Singer, R. A., Sea, R. G., and Housewright, K. B., "Derivation and Evaluation of Improved Tracking Filters for Use in ...
Composable Analytic Systems for next-generation intelligence analysis
NASA Astrophysics Data System (ADS)
DiBona, Phil; Llinas, James; Barry, Kevin
2015-05-01
Lockheed Martin Advanced Technology Laboratories (LM ATL) is collaborating with Professor James Llinas, Ph.D., of the Center for Multisource Information Fusion at the University at Buffalo (State of NY), researching concepts for a mixed-initiative associate system for intelligence analysts to facilitate reduced analysis and decision times while proactively discovering and presenting relevant information based on the analyst's needs, current tasks, and cognitive state. Today's exploitation and analysis systems have largely been designed for a specific sensor, data type, and operational context, leading to difficulty in directly supporting the analyst's evolving tasking and work product development preferences across complex operational environments. Our interactions with analysts illuminate the need to impact the information fusion, exploitation, and analysis capabilities in a variety of ways, including understanding data options, algorithm composition, hypothesis validation, and work product development. Composable Analytic Systems, an analyst-driven system that increases flexibility and the capability to effectively utilize Multi-INT fusion and analytics tailored to the analyst's mission needs, holds promise to address current and future intelligence analysis needs as US forces engage threats in contested and denied environments.
National Center for Multisource Information Fusion
2009-04-01
... discipline. The center has focused its efforts on solving the growing problems of exploiting massive quantities of diverse, and often ... development of a comprehensive high-level fusion framework that includes the addition of Level 2, 3, and 4 type tools to the ECCARS ... correlate IDS alerts into individual attacks and provide a threat assessment for the network. A comprehensive review of attack graphs was conducted.
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services within healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2, and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The proposed framework is used to improve healthcare scalability and efficiency by enhancing the remote triaging and remote prioritization processes for patients. The proposed framework is also used to provide intelligent services over telemonitoring healthcare service systems by using a data fusion method and a prioritization technique. As a telemonitoring system consists of three tiers (sensors/sources, base station, and server), the simulation of the MSHA algorithm in the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely prioritizing and triaging patients is our main goal. Meanwhile, the role of multi-source data fusion in telemonitoring healthcare service systems is demonstrated. In addition, we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results, for different symptoms related to different emergency levels of chronic heart disease, demonstrate the superiority of our algorithm compared with conventional algorithms in terms of remotely classifying and prioritizing patients.
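A toy sketch can illustrate the idea of fusing vital-sign sources with a text-based input into a triage priority. The thresholds, scoring, and emergency levels below are hypothetical illustrations only and are not the MSHA rules or any clinical guideline.

```python
# Toy triage-prioritization sketch in the spirit of the framework above:
# three vital-sign sources plus one text input fused into a priority level.
# All thresholds and level names are hypothetical, not clinical rules.
def triage_priority(heart_rate, spo2, systolic_bp, symptom_text=""):
    score = 0
    score += 2 if heart_rate > 120 or heart_rate < 45 else (1 if heart_rate > 100 else 0)
    score += 2 if spo2 < 90 else (1 if spo2 < 94 else 0)
    score += 2 if systolic_bp < 90 or systolic_bp > 180 else 0
    if any(k in symptom_text.lower() for k in ("chest pain", "shortness of breath")):
        score += 2                                    # text-based evidence
    levels = {0: "routine", 1: "routine", 2: "urgent", 3: "urgent"}
    return levels.get(score, "emergency")             # score of 4 or more -> emergency

print(triage_priority(118, 92, 135, "mild chest pain"))   # -> emergency
```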
NASA Astrophysics Data System (ADS)
Huang, W.; Jiang, J.; Zha, Z.; Zhang, H.; Wang, C.; Zhang, J.
2014-04-01
Geospatial data resources are the foundation of the construction of a geo portal, which is designed to provide online geoinformation services for government, enterprises, and the public. It is vital to keep geospatial data fresh, accurate, and comprehensive in order to satisfy the requirements of the application and development of geographic location services, route navigation, geo search, and so on. One of the major problems we face is data acquisition. For us, integrating multi-source geospatial data is the main means of data acquisition. This paper introduces a practical integration approach for multi-source geospatial data with different data models, structures, and formats, which provided the construction of the National Geospatial Information Service Platform of China (NGISP) with effective technical support. NGISP is China's official geo portal, providing online geoinformation services based on the internet, the e-government network, and the classified network. Within the NGISP architecture, there are three kinds of nodes: national, provincial, and municipal. The geospatial data therefore come from these nodes, and the different datasets are heterogeneous. According to the results of the analysis of the heterogeneous datasets, the first step is to define the basic principles of data fusion, covering the following aspects: (1) location precision; (2) geometric representation; (3) up-to-date state; (4) attribute values; and (5) spatial relationships. Then the technical procedure is researched, and the method used to process different categories of features, such as roads, railways, boundaries, rivers, settlements, and buildings, is proposed based on these principles. A case study in Jiangsu province demonstrates the applicability of the principles, procedure, and method of multi-source geospatial data integration.
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometrically regular directions and geometric flow, while sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on the Bandelet transform and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the new method performs better than the compared methods in terms of objective evaluation indexes and subjective visual effects.
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-01-01
The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct the ionospheric electron density (IED). In order to improve the vertical resolution of the IED, we study IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764
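The benefit of stacking several observation systems in an ill-posed inversion can be illustrated with a toy linear example. This is not the paper's CIT formulation: the projection matrices, noise levels, and Tikhonov damping are arbitrary assumptions, and the "sources" are only labeled by analogy.

```python
# Toy multi-source inversion sketch: observation systems from several sources
# are stacked and solved with Tikhonov regularization (not the CIT model).
import numpy as np

rng = np.random.default_rng(8)
n_vox = 50
x_true = np.sin(np.linspace(0, 3 * np.pi, n_vox))            # "electron density" profile

def make_source(n_rays, noise):
    """Hypothetical projection matrix + noisy observations for one data source."""
    A = rng.random((n_rays, n_vox))
    return A, A @ x_true + rng.normal(0, noise, n_rays)

sources = [make_source(15, 0.05),   # by analogy: ground-based GPS slant observations
           make_source(8, 0.02),    # by analogy: occultation profiles
           make_source(5, 0.10)]    # by analogy: altimetry / ionosonde constraints

A = np.vstack([a for a, _ in sources])
b = np.concatenate([y for _, y in sources])
lam = 1.0                                                   # Tikhonov damping
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ b)
rmse = float(np.sqrt(((x_hat - x_true) ** 2).mean()))
print("reconstruction RMSE:", round(rmse, 3))
```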
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes that are capable of producing multiple direction-of-arrival (DOA) estimates, such data need to be combined into meaningful estimates. The random finite set formalism provides a probabilistic model that is suitable for analysis and for the synthesis of optimal estimation algorithms. The proposed algorithm has been verified using a simulation and a controlled test experiment.
Distributed cluster management techniques for unattended ground sensor networks
NASA Astrophysics Data System (ADS)
Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon
2005-05-01
Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include the sensor fusion, data management and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track update is performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes geo-positional location. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics. High fidelity multi-target simulation results are presented, indicating the distribution of sensor management and tracking capabilities to not only reduce communication bandwidth consumption, but to also simplify multi-target tracking within the cluster.
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth surface have been proven to be of help in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have been recently proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox, conceived to produce flood maps from remotely sensed and other ancillary information, through a data fusion approach. DAFNE is based on Bayesian Networks, and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
Multisource information fusion applied to ship identification for the recognized maritime picture
NASA Astrophysics Data System (ADS)
Simard, Marc-Alain; Lefebvre, Eric; Helleur, Christopher
2000-04-01
The Recognized Maritime Picture (RMP) is defined as a composite picture of activity over a maritime area of interest. In simplistic terms, building an RMP comes down to finding whether an object of interest, a ship in our case, is there or not, determining what it is, determining what it is doing and determining if some type of follow-on action is required. The Canadian Department of National Defence currently has access to, or may in the near future have access to, a number of civilian, military and allied information or sensor systems to accomplish these purposes. These systems include automatic self-reporting positional systems, air patrol surveillance systems, high frequency surface radars, electronic intelligence systems, radar space systems and high frequency direction finding sensors. The ability to make full use of these systems is limited by the existing capability to fuse data from all sources in a timely, accurate and complete manner. This paper presents an information fusion system under development that correlates and fuses these information and sensor data sources. This fusion system, named the Adaptive Fuzzy Logic Correlator, correlates the information in batch but fuses and constructs ship tracks sequentially. It applies standard Kalman filter techniques and fuzzy logic correlation techniques. We propose a set of recommendations that should improve the ship identification process. In particular, it is proposed to utilize as many non-redundant sources of information as possible that address specific vessel attributes. Another important recommendation states that the information fusion and data association techniques should be capable of dealing with incomplete and imprecise information. Some fuzzy logic techniques capable of tolerating imprecise and dissimilar data are proposed.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
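The IMF-energy fault features feeding the feature layer can be sketched as follows; the decomposition itself is assumed to be available from an EEMD implementation (for example the PyEMD package), and only the energy computation is shown.

```python
import numpy as np

def imf_energy_features(imfs):
    """Normalized energy of each intrinsic mode function (IMF).

    imfs : array of shape (n_imfs, n_samples), e.g. the output of an EEMD
           decomposition of a vibration signal (the decomposition is assumed
           to be computed elsewhere).
    Returns a feature vector summing to 1; these are the kind of fault
    features fed into the feature layer of a diagnostic Bayesian network.
    """
    energies = np.sum(np.asarray(imfs) ** 2, axis=1)
    return energies / energies.sum()

# Toy example: a "signal" split into three synthetic components
t = np.linspace(0, 1, 1000)
imfs = np.vstack([np.sin(2 * np.pi * 50 * t),
                  0.5 * np.sin(2 * np.pi * 5 * t),
                  0.1 * np.random.randn(1000)])
print(imf_energy_features(imfs))
```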
Efficient Multi-Source Data Fusion for Decentralized Sensor Networks
2006-10-01
[Report excerpt garbled in extraction. Recoverable fragments mention a Common Operating Picture (COP), a Robovolc platform accessing a single DDF node associated with a CCTV camera (marked in orange in Figure 3a), Gaussian environments, and bearing-only particle-filter tracking results (Figure 10: Particle Distribution Snapshots).]
Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data
NASA Astrophysics Data System (ADS)
Han, X.; Wu, J.
2018-04-01
The period from the onset of a disaster to stable conditions is an important stage of disaster development. In addition to collecting and reporting information on the disaster situation, remote sensing images from satellites and drones and monitoring results from the disaster-stricken area should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis makes it possible to assess the disaster quickly and objectively. According to the characteristics of different hazards, models and methods driven by rapid-assessment mission requirements are tested and screened. Based on remote sensing images, the features of exposed elements are used to quickly determine disaster-affected areas and intensity levels, to extract key information about affected hospitals and schools as well as cultivated land and crops, and to support decisions in the emergency response with visual assessment results.
Joint sparsity based heterogeneous data-level fusion for target detection and estimation
NASA Astrophysics Data System (ADS)
Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe
2017-05-01
Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.
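A rough sense of data-level joint-sparse fusion over a discretized state grid can be conveyed with a generic orthogonal matching pursuit sketch; the random dictionaries and the solver below are illustrative stand-ins, not the JSDLF algorithm itself.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Discretize a single-target state space into G grid cells; each sensor
# contributes a dictionary whose columns are the responses it would observe if
# a target occupied that cell (random dictionaries here, purely illustrative).
rng = np.random.default_rng(1)
G, m_rf, m_cam, n_targets = 50, 20, 30, 2
A_rf = rng.standard_normal((m_rf, G))
A_cam = rng.standard_normal((m_cam, G))
A = np.vstack([A_rf, A_cam])                 # data-level stacking of both sensors
A /= np.linalg.norm(A, axis=0)               # unit-norm columns

x_true = np.zeros(G)
x_true[rng.choice(G, n_targets, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m_rf + m_cam)

print(np.nonzero(omp(A, y, n_targets))[0], np.nonzero(x_true)[0])
```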
Multisource feedback to graduate nurses: a multimethod study.
McPhee, Samantha; Phillips, Nicole M; Ockerby, Cherene; Hutchinson, Alison M
2017-11-01
(1) To explore graduate nurses' perceptions of the influence of multisource feedback on their performance and (2) to explore perceptions of Clinical Nurse Educators involved in providing feedback regarding feasibility and benefit of the approach. Graduate registered nurses are expected to provide high-quality care for patients in demanding and unpredictable clinical environments. Receiving feedback is essential to their development. Performance appraisals are a common method used to provide feedback and typically involve a single source of feedback. Alternatively, multisource feedback allows the learner to gain insight into performance from a variety of perspectives. This study explores multisource feedback in an Australian setting within the graduate nurse context. Multimethod study. Eleven graduates were given structured performance feedback from four raters: Nurse Unit Manager, Clinical Nurse Educator, preceptor and a self-appraisal. Thirteen graduates received standard single-rater appraisals. Data regarding perceptions of feedback for both groups were obtained using a questionnaire. Semistructured interviews were conducted with nurses who received multisource feedback and the educators. In total, 94% (n = 15) of survey respondents perceived feedback was important during the graduate year. Four themes emerged from interviews: informal feedback, appropriateness of raters, elements of delivery and creating an appraisal process that is 'more real'. Multisource feedback was perceived as more beneficial compared to single-rater feedback. Educators saw value in multisource feedback; however, perceived barriers were engaging raters and collating feedback. Some evidence exists to indicate that feedback from multiple sources is valued by graduates. Further research in a larger sample and with more experienced nurses is required. Evidence resulting from this study indicates that multisource feedback is valued by both graduates and educators and informs graduates' development and transition into the role. Thus, a multisource approach to feedback for graduate nurses should be considered. © 2016 John Wiley & Sons Ltd.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN input dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
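A minimal sketch of the KPCA-plus-ESN pipeline is given below. Reservoir size, scaling and the ridge readout are illustrative defaults, and the GSO optimization of the reservoir parameters is omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def esn_predict(U, y, n_reservoir=100, spectral_radius=0.9, ridge=1e-6, seed=0):
    """Minimal echo state network: random reservoir + ridge-regression readout.

    U : (n_samples, n_inputs) input sequence (e.g. KPCA-reduced froth-image
        features plus process variables); y : (n_samples,) target indicator.
    Returns predictions on the training sequence.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, U.shape[1]))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # scale spectral radius
    X = np.zeros((len(U), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)                        # reservoir state update
        X[t] = x
    # Ridge readout: solve (X^T X + lambda I) w = X^T y
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
    return X @ W_out

# Toy usage: reduce high-dimensional features with KPCA, then fit the ESN readout
rng = np.random.default_rng(1)
features = rng.standard_normal((200, 30))              # image + process features
target = np.sin(np.linspace(0, 8 * np.pi, 200))        # stand-in for concentrate grade
U = KernelPCA(n_components=5, kernel="rbf").fit_transform(features)
pred = esn_predict(U, target)
print(np.round(pred[:5], 3))
```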
Multi Sensor Fusion Using Fitness Adaptive Differential Evolution
NASA Astrophysics Data System (ADS)
Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam
The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results in the case of optimal sensor allocation. The performance of the proposed approach is compared with that of an evolutionary algorithm, the coordination generalized particle model (C-GPM).
Hill, Jacqueline J; Asprey, Anthea; Richards, Suzanne H; Campbell, John L
2012-01-01
Background: UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. Aim: To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. Design and setting: A qualitative study in UK general practice. Method: In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured, telephone interview. A thematic analysis was performed. Results: Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, ‘self’ selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. Conclusion: Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised. PMID:22546590
Hill, Jacqueline J; Asprey, Anthea; Richards, Suzanne H; Campbell, John L
2012-05-01
UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. A qualitative study in UK general practice. In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured, telephone interview. A thematic analysis was performed. Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, 'self' selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised.
Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.
Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue
2018-05-25
A novel multi-sensor fusion indoor localization algorithm based on ArUco markers is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, ultrasonic and inertial sensors, which yields a continuous localization result and effectively reduces the position drift caused by long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of Px4flow, and that it achieves centimeter-level mapping and positioning accuracy. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve the localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
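The master-filter combination step of a federated Kalman filter can be sketched in a few lines; the local filters for the marker, optical-flow, ultrasonic and inertial channels are assumed to run elsewhere.

```python
import numpy as np

def federated_fusion(estimates, covariances):
    """Master-filter fusion step of a federated Kalman filter.

    estimates   : list of local state estimates x_i (each shape (n,))
    covariances : list of local covariances P_i (each shape (n, n))
    Returns the information-weighted combination:
        P = (sum_i P_i^{-1})^{-1},  x = P @ sum_i P_i^{-1} x_i
    """
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
    return x_fused, P_fused

# Toy example: fuse a marker-based fix with an optical-flow/IMU dead-reckoned one
x_marker, P_marker = np.array([1.02, 0.48]), np.diag([0.01, 0.01])
x_flow, P_flow = np.array([1.10, 0.40]), np.diag([0.05, 0.05])
x, P = federated_fusion([x_marker, x_flow], [P_marker, P_flow])
print(np.round(x, 3), np.round(np.diag(P), 4))
```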
A multichannel decision-level fusion method for T wave alternans detection
NASA Astrophysics Data System (ADS)
Ye, Changrong; Zeng, Xiaoping; Li, Guojun; Shi, Chenyuan; Jian, Xin; Zhou, Xichuan
2017-09-01
Sudden cardiac death (SCD) is one of the most prominent causes of death among patients with cardiac diseases. Since ventricular arrhythmia is the main cause of SCD and it can be predicted by T wave alternans (TWA), the detection of TWA in the body-surface electrocardiogram (ECG) plays an important role in the prevention of SCD. However, due to the multi-source nature of TWA, its nonlinear propagation through the thorax, and the effects of strong noise, the information from different channels is uncertain and mutually competitive. As a result, single-channel decisions are one-sided, while multichannel decisions are difficult to bring to a consensus. In this paper, a novel multichannel decision-level fusion method based on the Dezert-Smarandache Theory is proposed to address this issue. Owing to its redistribution mechanism for highly competitive information, higher detection accuracy and robustness are achieved. It also shows promise for low-cost instruments and portable applications by reducing the demands for synchronous sampling. Experiments on real records from the Physikalisch-Technische Bundesanstalt diagnostic ECG database indicate that the performance of the proposed method improves by 12%-20% compared with the one-dimensional decision method based on periodic component analysis.
NASA Astrophysics Data System (ADS)
Ma, Y.; Liu, S.
2017-12-01
Accurate, high-quality estimation of surface evapotranspiration (ET) is one of the biggest obstacles for routine applications of remote sensing in eco-hydrological studies and water resource management at basin scale. However, many aspects still require in-depth research, such as the applicability of ET models, the optimization of parameterization schemes at the regional scale, temporal upscaling, the selection and development of spatiotemporal data fusion methods, and ground-based validation over heterogeneous land surfaces. This project is based on the theoretically robust surface energy balance system (SEBS) model, whose mechanism needs further investigation, including its applicability and influencing factors such as the local environment and landscape heterogeneity, in order to improve estimation accuracy. Because of technical and budget limitations, optical remote sensing data are so far often missing owing to frequent cloud contamination and other poor atmospheric conditions in Southwest China. Here, a multi-source remote sensing data fusion method (ESTARFM: Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model) is proposed that blends multi-source remote sensing data acquired by optical and passive microwave sensors on board polar-orbiting satellite platforms. Accurate "all-weather" daily ET estimation will be carried out for the River Source Region in Southwest China, and the remotely sensed ET results will then be overlaid with footprint-weighted images from EC (eddy correlation) measurements for ground-based validation.
Drosophila Cancer Models Identify Functional Differences between Ret Fusions.
Levinson, Sarah; Cagan, Ross L
2016-09-13
We generated and compared Drosophila models of RET fusions CCDC6-RET and NCOA4-RET. Both RET fusions directed cells to migrate, delaminate, and undergo EMT, and both resulted in lethality when broadly expressed. In all phenotypes examined, NCOA4-RET was more severe than CCDC6-RET, mirroring their effects on patients. A functional screen against the Drosophila kinome and a library of cancer drugs found that CCDC6-RET and NCOA4-RET acted through different signaling networks and displayed distinct drug sensitivities. Combining data from the kinome and drug screens identified the WEE1 inhibitor AZD1775 plus the multi-kinase inhibitor sorafenib as a synergistic drug combination that is specific for NCOA4-RET. Our work emphasizes the importance of identifying and tailoring a patient's treatment to their specific RET fusion isoform and identifies a multi-targeted therapy that may prove effective against tumors containing the NCOA4-RET fusion. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Data-to-Decisions S&T Priority Initiative
2011-11-08
[Briefing fragments garbled in extraction. Recoverable content: the Data-to-Decisions S&T Priority Initiative, presented by Dr. Carey Schwartz, PSC Lead, Office of Naval Research, at the NDIA Disruptive Technologies Conference, November 8-9, 2011; slide fragments cover context mapping, track performance models, multi-source tracking, track fusion, tracking through gaps, and move-stop-move performance.]
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system to achieve its goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, within a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors and the associated sensors that take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. In particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
Using soft-hard fusion for misinformation detection and pattern of life analysis in OSINT
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Shabarekh, Charlotte
2017-05-01
Today's battlefields are shifting to "denied areas", where the use of U.S. Military air and ground assets is limited. To succeed, the U.S. intelligence analysts increasingly rely on available open-source intelligence (OSINT) which is fraught with inconsistencies, biased reporting and fake news. Analysts need automated tools for retrieval of information from OSINT sources, and these solutions must identify and resolve conflicting and deceptive information. In this paper, we present a misinformation detection model (MDM) which converts text to attributed knowledge graphs and runs graph-based analytics to identify misinformation. At the core of our solution is identification of knowledge conflicts in the fused multi-source knowledge graph, and semi-supervised learning to compute locally consistent reliability and credibility scores for the documents and sources, respectively. We present validation of proposed method using an open source dataset constructed from the online investigations of MH17 downing in Eastern Ukraine.
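The flavor of semi-supervised source-reliability and claim-credibility scoring can be illustrated with a generic truth-discovery-style iteration. This is a sketch in the spirit of the approach, not the MDM algorithm described in the paper; the seeding of trusted sources and the sigmoid updates are illustrative assumptions.

```python
import numpy as np

def reliability_credibility(support, trusted=(), n_iter=20):
    """Alternating estimate of source reliability and claim credibility.

    support[i, j] = +1 if source i asserts claim j, -1 if it disputes it,
    0 if silent. `trusted` lists indices of sources with known-good labels
    (the semi-supervised seed).
    """
    n_sources, n_claims = support.shape
    reliability = np.full(n_sources, 0.5)
    reliability[list(trusted)] = 0.9              # pin labeled sources high
    for _ in range(n_iter):
        # Claims backed by reliable sources gain credibility (squashed to (0, 1))
        credibility = 1 / (1 + np.exp(-(support.T @ (reliability - 0.5))))
        # Sources agreeing with credible claims gain reliability
        reliability = 1 / (1 + np.exp(-(support @ (credibility - 0.5))))
        reliability[list(trusted)] = 0.9          # keep labeled sources pinned
    return reliability, credibility

# Toy example: source 2 contradicts the others; source 0 is a trusted outlet
S = np.array([[ 1,  1],
              [ 1,  0],
              [-1, -1]])
rel, cred = reliability_credibility(S, trusted=(0,))
print(np.round(rel, 2), np.round(cred, 2))
```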
Fusion of radar and satellite target measurements
NASA Astrophysics Data System (ADS)
Moy, Gabriel; Blaty, Donald; Farber, Morton; Nealy, Carlton
2011-06-01
A potentially high payoff for the ballistic missile defense system (BMDS) is the ability to fuse the information gathered by various sensor systems. In particular, it may be valuable in the future to fuse measurements made using ground based radars with passive measurements obtained from satellite-based EO/IR sensors. This task can be challenging in a multitarget environment in view of the widely differing resolution between active ground-based radar and an observation made by a sensor at long range from a satellite platform. Additionally, each sensor system could have a residual pointing bias which has not been calibrated out. The problem is further compounded by the possibility that an EO/IR sensor may not see exactly the same set of targets as a microwave radar. In order to better understand the problems involved in performing the fusion of metric information from EO/IR satellite measurements with active microwave radar measurements, we have undertaken a study of this data fusion issue and of the associated data processing techniques. To carry out this analysis, we have made use of high fidelity simulations to model the radar observations from a missile target and the observations of the same simulated target, as gathered by a constellation of satellites. In the paper, we discuss the improvements seen in our tests when fusing the state vectors, along with the improvements in sensor bias estimation. The limitations in performance due to the differing phenomenology between IR and microwave radar are discussed as well.
[Real-time detection of quality of Chinese materia medica: strategy of NIR model evaluation].
Wu, Zhi-sheng; Shi, Xin-yuan; Xu, Bing; Dai, Xing-xing; Qiao, Yan-jiang
2015-07-01
The definition of critical quality attributes of Chinese materia medica (CMM) was put forward based on the top-level design concept. Coupled with the development of rapid analytical science, rapid assessment of the critical quality attributes of CMM has recently been carried out as a secondary discipline branch of CMM. Taking near-infrared (NIR) spectroscopy, a rapid analytical technology applied to pharmaceutical processes over the past decade, as an example, the chemometric parameters used in NIR model evaluation are systematically reviewed. According to the complexity of CMM and the demands of trace-component analysis, a multi-source information fusion strategy for NIR models was developed for the assessment of the critical quality attributes of CMM. The strategy provides a guideline for reliable NIR analysis of the critical quality attributes of CMM.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for the updating of civil aviation and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is addressed. The corresponding key technologies, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models, are discussed in detail. Finally, based on these technologies, we develop a prototype system, and the results show that our method achieves good results.
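For reference, the standard Otsu threshold that the modified algorithm builds on can be sketched as follows (the paper's modification is not reproduced here).

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu threshold on an 8-bit image: pick the gray level that
    maximizes the between-class variance of foreground and background."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                      # class probability of background
    mu = np.cumsum(p * np.arange(256))        # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))

# Toy bimodal "image": dark tarmac vs. bright runway markings
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t, (img > t).mean())
```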
Cervo, Silvia; Rovina, Jane; Talamini, Renato; Perin, Tiziana; Canzonieri, Vincenzo; De Paoli, Paolo; Steffan, Agostino
2013-07-30
Efforts to improve patients' understanding of their own medical treatments or research in which they are involved are progressing, especially with regard to informed consent procedures. We aimed to design a multisource informed consent procedure that is easily adaptable to both clinical and research applications, and to evaluate its effectiveness in terms of understanding and awareness, even in less educated patients. We designed a multisource informed consent procedure for patients' enrolment in a Cancer Institute Biobank (CRO-Biobank). From October 2009 to July 2011, a total of 550 cancer patients admitted to the Centro di Riferimento Oncologico IRCCS Aviano, who agreed to contribute to its biobank, were consecutively enrolled. Participants were asked to answer a self-administered questionnaire aimed at exploring their understanding of biobanks and their needs for information on this topic, before and after study participation. Chi-square tests were performed on the questionnaire answers, according to gender or education. Of the 430 patients who returned the questionnaire, only 36.5% knew what a biobank was before participating in the study. Patients with less formal education were less informed by some sources (the Internet, newspapers, magazines, and our Institute). The final assessment test, taken after the multisource informed consent procedure, showed more than 95% correct answers. The information received was judged to be very or fairly understandable in almost all cases. More than 95% of patients were aware of participating in a biobank project, and gave helping cancer research (67.5%), moral obligation, and supporting cancer care as the main reasons for their involvement. Our multisource informed consent information system allowed a high rate of understanding and awareness of study participation, even among less-educated participants, and could be an effective and easy-to-apply model for supporting a well-informed decision-making process in several fields, from clinical practice to research. Further studies are needed to explore the effects on study comprehension of each source of information, and of other sources suggested by participants in the questionnaire.
Multimethod-Multisource Approach for Assessing High-Technology Training Systems.
ERIC Educational Resources Information Center
Shlechter, Theodore M.; And Others
This investigation examined the value of using a multimethod-multisource approach to assess high-technology training systems. The research strategy was utilized to provide empirical information on the instructional effectiveness of the Reserve Component Virtual Training Program (RCVTP), which was developed to improve the training of Army National…
Multi-Source Fusion for Explosive Hazard Detection in Forward Looking Sensors
2016-12-01
[Report fragments garbled in extraction. Recoverable objectives include: (1) investigating (a) thermal, (b) synthetic aperture acoustics (SAA) and (c) voxel-space radar for buried and side-attack threats; (2) [truncated] detection; and (3), with respect to SAA, new time- and frequency-domain approaches for analyzing signatures of concealed targets (called … Fraz), a method to extract a multi-spectral signature from SAA, and deep learning applied under limited training data and class imbalance.]
NASA Astrophysics Data System (ADS)
Li, Deying; Yin, Kunlong; Gao, Huaxi; Liu, Changchun
2009-10-01
Although the Three Gorges Dam project across the Yangtze River in China harnesses a huge potential source of hydroelectric power and reduces the loss of life and damage caused by floods, it also causes environmental problems, such as geo-hazards, due to the large rise and fluctuation of the water level. To prevent and predict geo-hazards, the establishment of a geo-hazard prediction system is necessary. To implement the functions of regional and urban geo-hazard prediction, single geo-hazard prediction, prediction of landslide surge, and risk evaluation, the logical layers of the system consist of a data capturing layer, a data manipulation and processing layer, an analysis and application layer, and an information publication layer. Because of the existence of multi-source spatial data, the transformation and fusion of multi-source data are also investigated in this paper. The applicability of the system was tested on the spatial prediction of landslide hazard through GIS spatial analysis, in which the information value method was applied to identify areas susceptible to future landslides on the basis of the historical record of past landslides, terrain parameters, geology, rainfall and anthropogenic activity. A detailed discussion is given of the spatial distribution characteristics of landslide hazard in the new town of Badong. These results can be used for risk evaluation. The system can be implemented as an early-warning and emergency management tool by the relevant authorities of the Three Gorges Reservoir in the future.
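The information value method mentioned above has a simple closed form; the sketch below (with hypothetical factor classes) computes the information value of each class of one conditioning factor.

```python
import numpy as np

def information_value(factor_class, landslide_mask):
    """Information value of each class of one conditioning factor.

    factor_class   : integer array, class index of the factor at each cell
                     (e.g. slope class, lithology class)
    landslide_mask : boolean array, True where past landslides occurred
    IV_i = ln( (N_i / N) / (S_i / S) ), where N_i / N are landslide cells in
    class i over total landslide cells and S_i / S are class-i cells over
    total cells.
    """
    N, S = landslide_mask.sum(), factor_class.size
    iv = {}
    for c in np.unique(factor_class):
        in_class = factor_class == c
        N_i, S_i = (landslide_mask & in_class).sum(), in_class.sum()
        iv[int(c)] = np.log(((N_i + 1e-9) / N) / (S_i / S))   # epsilon avoids log(0)
    return iv

# Toy example: 3 slope classes; class 2 hosts most historical landslides
rng = np.random.default_rng(0)
slope_class = rng.integers(0, 3, 10_000)
landslides = (slope_class == 2) & (rng.random(10_000) < 0.05)
landslides |= (slope_class != 2) & (rng.random(10_000) < 0.005)
print(information_value(slope_class, landslides))
# A susceptibility map is obtained by summing the IV of each factor at each cell.
```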
Multi-Source Evaluation of Interpersonal and Communication Skills of Family Medicine Residents
ERIC Educational Resources Information Center
Leung, Kai-Kuen; Wang, Wei-Dan; Chen, Yen-Yuan
2012-01-01
There is a lack of information on the use of multi-source evaluation to assess trainees' interpersonal and communication skills in Oriental settings. This study is conducted to assess the reliability and applicability of assessing the interpersonal and communication skills of family medicine residents by patients, peer residents, nurses, and…
Evidence Combination From an Evolutionary Game Theory Perspective.
Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu
2016-09-01
Dempster-Shafer evidence theory is a primary methodology for multisource information fusion because it is good at dealing with uncertain information. This theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multievidence system. Within the proposed ECR, we develop a Jaccard matrix game to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been mathematically proved as well.
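For context, Dempster's rule of combination and the counter-intuitive behavior it can produce are easy to reproduce; the sketch below applies the standard rule to a Zadeh-style example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 : dicts mapping frozenset hypotheses to masses (each summing to 1).
    Returns the combined assignment; raises if the evidence is fully conflicting.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Zadeh-style example of the counter-intuitive behavior mentioned above:
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, B: 0.01}
m2 = {C: 0.99, B: 0.01}
print(dempster_combine(m1, m2))   # all mass collapses onto the unlikely B
```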
Dang, Yaoguo; Mao, Wenxin
2018-01-01
In view of the multi-attribute decision-making problem in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute multi-level decision structure and the causalities between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method. PMID:29510521
Sun, Huifang; Dang, Yaoguo; Mao, Wenxin
2018-03-03
In view of the multi-attribute decision-making problem in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute multi-level decision structure and the causalities between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
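The random-forest integration of multi-source voxel features can be sketched with scikit-learn; the toy data, feature names and one-iteration setup below are illustrative assumptions, not the LINKS pipeline itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Voxel-wise features from several sources (T1, T2, FA intensities plus the
# current tissue-probability estimates) are concatenated and fed to a random
# forest, mimicking one iteration of the integration idea.
rng = np.random.default_rng(0)
n_voxels = 5000
t1, t2, fa = rng.normal(size=(3, n_voxels))
prob_maps = rng.random((n_voxels, 3))          # current GM/WM/CSF probability estimates
X = np.column_stack([t1, t2, fa, prob_maps])
y = rng.integers(0, 3, n_voxels)               # stand-in tissue labels (GM/WM/CSF)

clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
clf.fit(X[:4000], y[:4000])
print("held-out accuracy on random toy data:", clf.score(X[4000:], y[4000:]))
# In the iterative scheme, the forest's predicted probabilities would replace
# prob_maps and the forest would be retrained until the maps stabilize.
```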
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
Variable cycle control model for intersection based on multi-source information
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan
2018-05-01
In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
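The cell-update rule at the heart of a Cell Transmission Model can be sketched as follows; the parameter values and the single-lane-group simplification are illustrative assumptions rather than the paper's lane-group formulation.

```python
import numpy as np

def ctm_step(n, v=1.0, w=0.5, Q=40.0, N=120.0, inflow=30.0):
    """One update of a Daganzo-style Cell Transmission Model on one lane group.

    n : vehicles currently in each cell. Flow into cell i is
        y_i = min(n_{i-1}, Q, (w / v) * (N - n_i)),
    i.e. the minimum of what upstream can send, saturation flow, and what
    downstream can receive (free-flow speed v, backward wave speed w,
    capacity Q, jam occupancy N, upstream demand `inflow`).
    """
    sending = np.concatenate([[inflow], n[:-1]])          # what upstream offers to each cell
    receiving = (w / v) * (N - n)                         # space left in each cell
    y = np.minimum(np.minimum(sending, Q), receiving)     # realized inter-cell flows
    outflow = np.concatenate([y[1:], [min(n[-1], Q)]])    # last cell discharges freely
    return n + y - outflow

# Toy usage: 5 cells, the last one congested (e.g. a red signal just ended)
n = np.array([10.0, 15.0, 20.0, 35.0, 110.0])
for _ in range(3):
    n = ctm_step(n)
    print(np.round(n, 1))
```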
The Finnish multisource national forest inventory: small-area estimation and map production
Erkki Tomppo
2009-01-01
A driving force motivating development of the multisource national forest inventory (MS-NFI) in connection with the Finnish national forest inventory (NFI) was the desire to obtain forest resource information for smaller areas than is possible using field data only without significantly increasing the cost of the inventory. A basic requirement for the method was that...
Collaborative filtering on a family of biological targets.
Erhan, Dumitru; L'heureux, Pierre-Jean; Yue, Shi Yi; Bengio, Yoshua
2006-01-01
Building a QSAR model of a new biological target for which few screening data are available is a statistical challenge. However, the new target may be part of a bigger family, for which we have more screening data. Collaborative filtering or, more generally, multi-task learning, is a machine learning approach that improves the generalization performance of an algorithm by using information from related tasks as an inductive bias. We use collaborative filtering techniques for building predictive models that link multiple targets to multiple examples. The more commonalities between the targets, the better the multi-target model that can be built. We show an example of a multi-target neural network that can use family information to produce a predictive model of an undersampled target. We also evaluate JRank, a kernel-based method designed for collaborative filtering. We show the performance of these methods on compound prioritization for an HTS campaign and examine the underlying shared representation between targets. JRank outperformed the neural network both in the single- and multi-target models.
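A minimal multi-target model in the spirit of the above (a single network predicting all targets in the family, so related targets act as an inductive bias) might look like the following. It is a generic stand-in with synthetic data, not the paper's architecture or the JRank method.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

# Synthetic QSAR-style data: three related targets share most of their
# structure; target 2 is the noisiest. One network with a shared hidden layer
# predicts all three activities at once.
rng = np.random.default_rng(0)
n_compounds, n_descriptors = 500, 64
X = rng.standard_normal((n_compounds, n_descriptors))
w = rng.standard_normal(n_descriptors)
Y = np.column_stack([X @ w,
                     X @ (w + 0.2 * rng.standard_normal(n_descriptors)),
                     X @ (w + 0.2 * rng.standard_normal(n_descriptors))
                     + 0.5 * rng.standard_normal(n_compounds)])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:400], Y[:400])                      # shared hidden layer, 3 outputs
pred = model.predict(X[400:])
for t in range(3):
    print(f"target {t} held-out R^2:", round(r2_score(Y[400:, t], pred[:, t]), 3))
```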
NASA Astrophysics Data System (ADS)
Chang, N. B.; Yang, Y. J.; Daranpob, A.
2009-09-01
Recent extreme hydroclimatic events in the United States alone include, but are not limited to, the droughts in Maryland and the Chesapeake Bay area in 2001 through September 2002; Lake Mead in Las Vegas in 2000 through 2004; the Peace River and Lake Okeechobee in South Florida in 2006; and Lake Lanier in Atlanta, Georgia in 2007, which affected the water resources distribution in three states - Alabama, Florida and Georgia. This paper provides evidence from previous work and elaborates on future perspectives that collectively employ remote sensing and in-situ observations to support the implementation of water availability assessment in a metropolitan region. Within the hydrological cycle, precipitation, soil moisture, and evapotranspiration can be monitored by using WSR-88D/NEXRAD data, RADARSAT-1 images, and GEOS images collectively to address the spatiotemporal variations of the quantitative availability of water, whereas MODIS images may be used to track the qualitative availability of water in terms of turbidity, Chlorophyll-a and other constituents of concern. Tampa Bay in Florida was selected as the study site in this analysis, where the water supply infrastructure covers groundwater, a desalination plant, and surface water at the same time. Research findings show that through the proper fusion of multi-source and multi-scale remote sensing data for water availability assessment in a metropolitan region, new insight into water infrastructure assessment can be gained to support sustainable planning region-wide.
Evidence Combination From an Evolutionary Game Theory Perspective
Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu
2017-01-01
Dempster-Shafer evidence theory is a primary methodology for multi-source information fusion because it is good at dealing with uncertain information. This theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multi-evidence system. Within the proposed ECR, we develop a Jaccard matrix game (JMG) to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been mathematically proved as well. PMID:26285231
Scaling dimensions in spectroscopy of soil and vegetation
NASA Astrophysics Data System (ADS)
Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.
2007-05-01
The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; White, Amanda M.; Whitney, Paul D.
2013-06-04
The Multi-Source Signatures for Nuclear Programs project, part of Pacific Northwest National Laboratory's (PNNL) Signature Discovery Initiative, seeks to computationally capture expert assessment of multi-type information such as text, sensor output, imagery, or audio/video files, to assess nuclear activities through a series of Bayesian network (BN) models. These models incorporate knowledge from a diverse range of information sources in order to help assess a country's nuclear activities. The models span engineering topic areas, state-level indicators, and facility-specific characteristics. To illustrate the development, calibration, and use of BN models for multi-source assessment, we present a model that predicts a country's likelihood to participate in the international nuclear nonproliferation regime. We validate this model by examining the extent to which the model assists non-experts in arriving at conclusions similar to those provided by nuclear proliferation experts. We also describe the PNNL-developed software used throughout the lifecycle of the Bayesian network model development.
Multi-Source Sensor Fusion for Small Unmanned Aircraft Systems Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Cook, Brandon; Cohen, Kelly
2017-01-01
As the applications for using small Unmanned Aircraft Systems (sUAS) beyond visual line of sight (BVLOS) continue to grow in the coming years, it is imperative that intelligent sensor fusion techniques be explored. In BVLOS scenarios the vehicle position must be accurately tracked over time to ensure that no two vehicles collide with one another, that no vehicle crashes into surrounding structures, and to identify off-nominal scenarios. Therefore, in this study an intelligent systems approach is used to estimate the position of sUAS given a variety of sensor platforms, including GPS, radar, and on-board detection hardware. Common research challenges include asynchronous sensor rates and sensor reliability. To address these challenges, techniques such as Maximum a Posteriori estimation and a fuzzy-logic-based sensor confidence determination are used.
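A toy version of fuzzy-logic confidence weighting for position fusion is sketched below; the membership functions, the min-as-AND rule, and the report attributes are illustrative assumptions, not the authors' fuzzy system.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuse_positions(reports):
    """Confidence-weighted position fusion for one sUAS track.

    reports : list of (position, age_s, innovation_m) per source (GPS, radar,
              on-board detection). Confidence combines a 'fresh measurement'
              membership and a 'small innovation' membership.
    """
    positions, weights = [], []
    for pos, age, innovation in reports:
        fresh = triangular(age, -1.0, 0.0, 2.0)               # recent reports trusted more
        consistent = triangular(innovation, -1.0, 0.0, 15.0)  # close to the predicted track
        positions.append(pos)
        weights.append(min(fresh, consistent))                # simple AND (min) rule
    weights = np.asarray(weights) + 1e-9
    return np.average(np.asarray(positions, dtype=float), axis=0, weights=weights)

# GPS (fresh, consistent), radar (slightly stale), detection (large innovation)
reports = [((10.0, 5.0), 0.1, 2.0), ((10.6, 5.4), 1.0, 4.0), ((14.0, 9.0), 0.2, 12.0)]
print(np.round(fuse_positions(reports), 2))
```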
Phase equilibria constraints on models of subduction zone magmatism
NASA Astrophysics Data System (ADS)
Myers, James D.; Johnston, Dana A.
Petrologic models of subduction zone magmatism can be grouped into three broad classes: (1) predominantly slab-derived, (2) mainly mantle-derived, and (3) multi-source. Slab-derived models assume high-alumina basalt (HAB) approximates primary magma and is derived by partial fusion of the subducting slab. Such melts must, therefore, be saturated with some combination of eclogite phases, e.g. cpx, garnet, qtz, at the pressures, temperatures and water contents of magma generation. In contrast, mantle-dominated models suggest partial melting of the mantle wedge produces primary high-magnesia basalts (HMB) which fractionate to yield derivative HAB magmas. In this context, HMB melts should be saturated with a combination of peridotite phases, i.e. ol, cpx and opx, and have liquid-lines-of-descent that produce high-alumina basalts. HAB generated in this manner must be saturated with a mafic phase assemblage at the intensive conditions of fractionation. Multi-source models combine slab and mantle components in varying proportions to generate the four main lava types (HMB, HAB, high-magnesia andesites (HMA) and evolved lavas) characteristic of subduction zones. The mechanism of mass transfer from slab to wedge as well as the nature and fate of primary magmas vary considerably among these models. Because of their complexity, these models imply a wide range of phase equilibria. Although the experiments conducted on calc-alkaline lavas are limited, they place the following limitations on arc petrologic models: (1) HAB cannot be derived from HMB by crystal fractionation at the intensive conditions thus far investigated, (2) HAB could be produced by anhydrous partial fusion of eclogite at high pressure, (3) HMB liquids can be produced by peridotite partial fusion 50-60 km above the slab-mantle interface, (4) HMA cannot be primary magmas derived by partial melting of the subducted slab, but could have formed by slab melt-peridotite interaction, and (5) many evolved calc-alkaline lavas could have been formed by crystal fractionation at a range of crustal pressures.
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-04-12
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
Towards Device-Independent Information Processing on General Quantum Networks
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Hoban, Matty J.
2018-01-01
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
Towards Device-Independent Information Processing on General Quantum Networks.
Lee, Ciarán M; Hoban, Matty J
2018-01-12
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-01-01
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-12-25
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
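The RLS refinement described above can be pictured as a Kalman-style update of a static target position from successive single-image fixes; the sketch below, with hypothetical noise settings, shows how each new fix shrinks the estimate covariance. It illustrates the general RLS idea only, not the paper's exact formulation (which also folds in UAV dead reckoning).

```python
import numpy as np

class RLSGeolocator:
    """Recursive least-squares refinement of an (assumed static) target position
    from a stream of single-image geolocation fixes."""
    def __init__(self, init_pos, init_cov=1e3, meas_var=25.0, forget=1.0):
        self.x = np.asarray(init_pos, dtype=float)   # current position estimate
        self.P = np.eye(len(self.x)) * init_cov      # estimate covariance
        self.R = np.eye(len(self.x)) * meas_var      # per-fix measurement noise (assumed)
        self.lam = forget                            # forgetting factor (1.0 = none)

    def update(self, z):
        z = np.asarray(z, dtype=float)
        S = self.P + self.R                          # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)                # RLS gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P / self.lam
        return self.x

rls = RLSGeolocator(init_pos=[0.0, 0.0])
for fix in [[10.2, -4.9], [9.7, -5.3], [10.1, -5.0]]:   # fixes from successive frames
    est = rls.update(fix)
print(est)   # converges toward the per-frame average as the covariance shrinks
```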
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate this comprehensive uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the comprehensive uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the comprehensive uncertainty of the geological model. This posterior distribution represents the combined impact of all uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the comprehensive impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
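As a rough illustration of the framework (not the authors' code), the sketch below starts from a maximum-entropy (uniform) prior over a few candidate lithologies at one model cell and folds in several evidence sources via Bayes' rule; the likelihood vectors are invented for the example, and residual entropy is used as a simple uncertainty summary.

```python
import numpy as np

def max_entropy_prior(n_classes):
    """With no constraints, the maximum-entropy prior is uniform."""
    return np.full(n_classes, 1.0 / n_classes)

def bayes_update(prior, likelihood):
    """Posterior ∝ prior × likelihood; normalization keeps it a distribution."""
    post = prior * likelihood
    return post / post.sum()

# Three candidate lithologies at one model cell
prior = max_entropy_prior(3)

# Sequentially fold in evidence: borehole data, geophysics, expert (cognitive) input.
# Each row is a made-up likelihood of the observation given each lithology.
evidence = [
    np.array([0.7, 0.2, 0.1]),   # borehole log
    np.array([0.5, 0.4, 0.1]),   # geophysical inversion
    np.array([0.6, 0.3, 0.1]),   # expert judgement
]
posterior = prior
for lik in evidence:
    posterior = bayes_update(posterior, lik)

entropy = -np.sum(posterior * np.log(posterior))   # remaining (comprehensive) uncertainty
print(posterior, entropy)
```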
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition
NASA Astrophysics Data System (ADS)
LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.
2013-12-01
Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage in that multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, implementing the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, multi-source excitation adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity. Between December of 2012 and July of 2013, we completed multi-source surveys at five sites with depths of exploration ranging from 150 to 450 m. The sites included shallow geothermal sites near Reno Nevada, Pomarance Italy, and Volterra Italy; a mineral exploration site near Timmins Quebec; and a landslide investigation near Vajont Dam in northern Italy. These sites provided a series of challenges in survey design and deployment, including some extremely difficult terrain and a broad range of background resistivity and induced polarization values. Despite these challenges, comparison of multi-source results to resistivity and induced polarization data collected with more traditional methods supports the thesis that the multi-source approach is capable of providing substantial improvements in both depth of penetration and resolution over conventional approaches.
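The 6400 W figure quoted above follows from the quadratic relation between injected current and transmitter power. A short derivation, assuming equal dipole resistance R and linear superposition of the received signals:

```latex
\begin{align*}
  P_0 &= I_0^{2} R = 400\,\mathrm{W}
      && \text{(each small transmitter injects current } I_0\text{)}\\
  S_{\text{multi}} &\propto 4 I_0
      && \text{(four parallel dipoles, signals add)}\\
  P_{\text{single}} &= (4 I_0)^{2} R = 16\, I_0^{2} R
      = 16 \times 400\,\mathrm{W} = 6400\,\mathrm{W}
      && \text{(power one unit would need for the same signal)}
\end{align*}
```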
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
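A compact sketch of Dempster's rule as used to combine evidence from two data sources. Ordinary (point-valued) mass functions are used here rather than the interval-valued probabilities of the paper, and the ground-cover classes and masses are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements.
    Conflicting mass (empty intersections) is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy ground-cover example: spectral source vs. terrain source over {forest, crop}
F, C = frozenset({"forest"}), frozenset({"crop"})
theta = F | C                                       # full frame of discernment
m_spectral = {F: 0.6, C: 0.1, theta: 0.3}           # remaining ignorance left on theta
m_terrain  = {F: 0.5, C: 0.2, theta: 0.3}
print(dempster_combine(m_spectral, m_terrain))
```

Mass assigned to the full frame theta plays the role of unassigned (ignorant) belief, which is what the interval-valued representation in the paper generalizes.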
The role of multi-target policy instruments in agri-environmental policy mixes.
Schader, Christian; Lampkin, Nicholas; Muller, Adrian; Stolze, Matthias
2014-12-01
The Tinbergen Rule has been used to criticise multi-target policy instruments for being inefficient. The aim of this paper is to clarify the role of multi-target policy instruments using the case of agri-environmental policy. Employing an analytical linear optimisation model, this paper demonstrates that there is no general contradiction between multi-target policy instruments and the Tinbergen Rule, if multi-target policy instruments are embedded in a policy-mix with a sufficient number of targeted instruments. We show that the relation between cost-effectiveness of the instruments, related to all policy targets, is the key determinant for an economically sound choice of policy instruments. If economies of scope with respect to achieving policy targets are realised, a higher cost-effectiveness of multi-target policy instruments can be achieved. Using the example of organic farming support policy, we discuss several reasons why economies of scope could be realised by multi-target agri-environmental policy instruments. Copyright © 2014 Elsevier Ltd. All rights reserved.
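To make the argument concrete, the toy linear program below (coefficients are hypothetical, not taken from the paper's model) minimizes total spending subject to two policy targets; when the multi-target instrument enjoys economies of scope, it enters the optimal mix without violating the Tinbergen logic.

```python
from scipy.optimize import linprog

# Decision variables: spending on [targeted_instrument_1, targeted_instrument_2, multi_target]
cost = [1.0, 1.0, 1.0]   # minimize total spending

# Each unit of spending delivers this much progress toward (target_1, target_2);
# the multi-target instrument (e.g. organic farming support) contributes to both.
# Rows are negated because linprog expects <= constraints.
effect = [
    [-0.8, -0.0, -0.5],   # target 1 achievement
    [-0.0, -0.7, -0.5],   # target 2 achievement
]
required = [-10.0, -10.0]  # reach at least 10 units of each target

res = linprog(c=cost, A_ub=effect, b_ub=required, bounds=[(0, None)] * 3)
print(res.x, res.fun)      # optimal instrument mix and its total cost
```

With these made-up coefficients the multi-target instrument dominates the solution; changing its per-target effectiveness shifts the optimum back toward the targeted instruments, which is the cost-effectiveness relation the paper emphasizes.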
Multi-Target State Extraction for the SMC-PHD Filter
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
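A simplified sketch in the spirit of the measurement-driven extraction above: measurements backed by enough particle likelihood mass are treated as detections, and each one yields a weighted-mean state estimate. The Gaussian likelihood, threshold, and particle cloud are illustrative assumptions, not the paper's exact validation mechanism.

```python
import numpy as np

def extract_states(particles, weights, measurements, noise_std=1.0, thresh=0.3):
    """Measurement-driven state extraction for an SMC-PHD style filter (sketch).
    A measurement is 'effective' if enough particle mass supports it; each
    effective measurement then yields one state estimate from its own particles."""
    states = []
    for z in measurements:
        d2 = np.sum((particles - z) ** 2, axis=1)          # squared distance to z
        lik = np.exp(-0.5 * d2 / noise_std ** 2)            # position-only likelihood
        support = np.sum(weights * lik)                     # PHD mass attributable to z
        if support > thresh:
            w = weights * lik
            states.append(np.sum(particles * w[:, None], axis=0) / w.sum())
    return np.array(states)

rng = np.random.default_rng(0)
# Two targets near (0,0) and (10,10), represented by a weighted particle cloud
particles = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(10, 1, (500, 2))])
weights = np.full(1000, 2.0 / 1000)              # total mass ~ expected target number
measurements = np.array([[0.2, -0.1], [9.8, 10.3], [40.0, 40.0]])  # last one is clutter
print(extract_states(particles, weights, measurements))   # clutter yields no estimate
```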
35-GHz radar sensor for automotive collision avoidance
NASA Astrophysics Data System (ADS)
Zhang, Jun
1999-07-01
This paper describes the development of a radar sensor system used for automotive collision avoidance. Because a heavy truck may have a much larger radar cross section than a motorcyclist, the radar receiver must handle a large dynamic range, and multiple targets at different speeds may confuse the echo spectrum, causing ambiguity between the range and speed of a target. To obtain more information about targets and background, and to cope with the large dynamic range and multiple targets, a frequency-modulated and pseudo-random binary sequence phase-modulated continuous wave radar system is described. An analysis of this double-modulation system is given. High-speed signal processing and data processing components are used to process and combine the data and information from echoes at different directions and at every moment.
Li, Ying Hong; Wang, Pan Pan; Li, Xiao Xu; Yu, Chun Yan; Yang, Hong; Zhou, Jin; Xue, Wei Wei; Tan, Jun; Zhu, Feng
2016-01-01
The human kinome is one of the most productive classes of drug targets, and there is an emerging need to treat complex diseases by means of polypharmacology (multi-target drugs and combination products). However, the advantages of multi-target drugs and combination products are still under debate. A comparative analysis between FDA approved multi-target drugs and combination products targeting the human kinome was conducted by mapping targets onto the phylogenetic tree of the human kinome. The network medicine approach of illustrating drug-target interactions was applied to identify popular targets of multi-target drugs and combination products. As identified, the multi-target drugs tended to inhibit target pairs in the human kinome, especially in the receptor tyrosine kinase family, while the combination products were able to act against targets with distant homology relationships. This finding suggests combination products as the better choice when designing drugs aimed at targets with distant homology relationships. Moreover, sub-networks of drug-target interactions in specific diseases were generated, and mechanisms shared by multi-target drugs and combination products were identified. In conclusion, this study performed an analysis between approved multi-target drugs and combination products against the human kinome, which could assist the discovery of next generation polypharmacology.
Multisource Data Integration in Remote Sensing
NASA Technical Reports Server (NTRS)
Tilton, James C. (Editor)
1991-01-01
Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated in a (computer-) model of this world. Multiple sources may give complementary views of the world - consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.
Predicting Drug-Target Interactions With Multi-Information Fusion.
Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin
2017-03-01
Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models only consider chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised based learning framework called NormMulInf through collaborative filtering theory by using labeled and unlabeled interaction information. The proposed method initially determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Part of the predicted interactions are reported in public databases. The proposed method can also predict possible targets for new drugs and can be used to determine whether atropine may interact with alpha1B- and beta1- adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and can be used to assess whether olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations on studies of multitarget drugs and multidrug targets.
Malli, Theodora; Buxhofer-Ausch, Veronika; Rammer, Melanie; Erdel, Martin; Kranewitter, Wolfgang; Rumpold, Holger; Marschon, Renate; Deutschbauer, Sabine; Simonitsch-Klupp, Ingrid; Valent, Peter; Muellner-Ammer, Kirsten; Sebesta, Christian; Birkner, Thomas; Webersinke, Gerald
2016-01-01
Myeloid and lymphoid neoplasms with fibroblast growth factor receptor 1 (FGFR1) abnormalities, also known as 8p11 myeloproliferative syndrome (EMS), represent rare and aggressive disorders, associated with chromosomal aberrations that lead to the fusion of FGFR1 to different partner genes. We report on a third patient with a fusion of the translocated promoter region (TPR) gene, a component of the nuclear pore complex, to FGFR1 due to a novel ins(1;8)(q25;p11p23). The fact that this fusion is a rare but recurrent event in EMS prompted us to examine the localization and transforming potential of the chimeric protein. TPR-FGFR1 localizes in the cytoplasm, although the nuclear pore localization signal of TPR is retained in the fusion protein. Furthermore, TPR-FGFR1 enables cytokine-independent survival, proliferation, and granulocytic differentiation of the interleukin-3 dependent myeloid progenitor cell line 32Dcl3, reflecting the chronic phase of EMS characterized by myeloid hyperplasia. 32Dcl3 cells transformed with the TPR-FGFR1 fusion and treated with increasing concentrations of the tyrosine kinase inhibitors ponatinib (AP24534) and infigratinib (NVP-BGJ398) displayed reduced survival and proliferation with IC50 values of 49.8 and 7.7 nM, respectively. Ponatinib, a multitargeted tyrosine kinase inhibitor, is already shown to be effective against several FGFR1-fusion kinases. Infigratinib, tested only against FGFR1OP2-FGFR1 to date, is also efficient against TPR-FGFR1. Taking its high specificity for FGFRs into account, infigratinib could be beneficial for EMS patients and should be further investigated for the treatment of myeloproliferative neoplasms with FGFR1 abnormalities. © 2015 Wiley Periodicals, Inc.
Zhu, You-Cai; Zhou, Yue-Fen; Wang, Wen-Xian; Xu, Chun-Wei; Zhuang, Wu; Du, Kai-Qi; Chen, Gang
2018-05-01
ROS1 rearrangement is a validated therapeutic driver gene in non-small cell lung cancer (NSCLC) and represents a small subset (1-2%) of NSCLC. A total of 17 different fusion partner genes of ROS1 in NSCLC have been reported. The multi-targeted MET/ALK/ROS1 tyrosine kinase inhibitor (TKI) crizotinib has demonstrated remarkable efficacy in ROS1-rearranged NSCLC. Consequently, ROS1 detection assays include fluorescence in situ hybridization, immunohistochemistry, and real-time PCR. Next-generation sequencing (NGS) assay covers a range of fusion genes and approaches to discover novel receptor-kinase rearrangements in lung cancer. A 63-year-old male smoker with stage IV NSCLC (TxNxM1) was detected with a novel ROS1 fusion. Histological examination of the tumor showed lung adenocarcinoma. NGS analysis of the hydrothorax cellblocks revealed a novel CEP72-ROS1 rearrangement. This novel CEP72-ROS1 fusion variant is generated by the fusion of exons 1-11 of CEP72 on chromosome 5p15 to exons 23-43 of ROS1 on chromosome 6q22. The predicted CEP72-ROS1 protein product contains 1202 amino acids comprising the N-terminal amino acids 594-647 of CEP72 and C-terminal amino acid 1-1148 of ROS1. CEP72-ROS1 is a novel ROS1 fusion variant in NSCLC discovered by NGS and could be included in ROS1 detection assay, such as reverse transcription PCR. Pleural effusion samples show good diagnostic performance in clinical practice. © 2018 The Authors. Thoracic Cancer published by China Lung Oncology Group and John Wiley & Sons Australia, Ltd.
Supervised classification of aerial imagery and multi-source data fusion for flood assessment
NASA Astrophysics Data System (ADS)
Sava, E.; Harding, L.; Cervone, G.
2015-12-01
Floods are among the most devastating natural hazards and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for their mitigation and response. Remote sensing technologies have become the de-facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques in order to produce flood assessments during and after an event. Recent advancements in data fusion techniques of remote sensing with near real time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) and tweets. A new computational methodology is proposed based on machine learning algorithms to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested and stored on a MongoDB and queried in real time. They are fused with CAP classified data, and with satellite remote sensing derived flood extent results to produce comprehensive flood assessment maps. The final maps are then compared with FEMA generated flood extents to assess their accuracy. The proposed methodology is applied on two test cases, relative to the 2013 floods in Boulder CO, and the 2015 floods in Texas.
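The per-pixel uncertainty measure described above (the fraction of parallel classifiers voting "water") can be sketched as follows; the ensemble here is a set of hypothetical threshold rules standing in for the trained wavelet-feature classifiers.

```python
import numpy as np

def water_uncertainty(pixel_features, models):
    """Per-pixel water votes over an ensemble of classifiers (sketch).
    The uncertainty proxy is the fraction of models labelling a pixel as water."""
    votes = np.stack([m(pixel_features) for m in models], axis=0)  # (n_models, n_pixels)
    water_fraction = votes.mean(axis=0)     # 1.0 = every model agrees the pixel is water
    water_mask = water_fraction >= 0.5      # majority-vote flood map
    return water_mask, water_fraction

# Hypothetical ensemble: each "model" thresholds a different wavelet-derived feature
pixel_features = np.array([[0.1, 0.9, 0.8],     # feature A per pixel
                           [0.2, 0.7, 0.4]])    # feature B per pixel
models = [lambda f: f[0] > 0.5,
          lambda f: f[1] > 0.5,
          lambda f: f.mean(axis=0) > 0.5]
mask, frac = water_uncertainty(pixel_features, models)
print(mask, frac)
```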
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc. and therefore may lead to wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their use for many applications. Further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method in comparison to other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, rail roads, etc.
Measurement level AIS/radar fusion for maritime surveillance
NASA Astrophysics Data System (ADS)
Habtemariam, Biruk K.; Tharmarasa, R.; Meger, Eric; Kirubarajan, T.
2012-05-01
Using the Automatic Identification System (AIS), ships identify themselves intermittently by broadcasting their location information. However, radars are traditionally used as the primary source of surveillance, and AIS is considered a supplement, with little interaction between these data sets. The data from AIS are much more accurate than radar data, with practically no false alarms. But unlike radar data, the AIS measurements arrive unpredictably, depending on the type and behavior of a ship. The AIS data include target IDs that can be associated with initialized tracks. In a multitarget maritime surveillance environment, the revisit interval from the AIS can be very large for some targets. In addition, the revisit intervals for various targets can differ. In this paper, we propose a joint probabilistic data association based tracking algorithm that addresses the aforementioned issues to fuse the radar measurements with AIS data. Multiple AIS IDs are assigned to a track, with probabilities updated by both AIS and radar measurements to resolve the ambiguity in the AIS ID source. Experimental results based on simulated data demonstrate the performance of the proposed technique.
Abdolmaleki, Azizeh; Ghasemi, Jahan B
2017-01-01
Finding high quality starting compounds is a critical job at the beginning of the lead generation stage of multi-target drug discovery (MTDD). Designing hybrid compounds as selective multi-target chemical entities is a challenge, an opportunity, and a new idea for acting more effectively against specific multiple targets. A hybrid molecule is formed by the participation of two (or more) pharmacophore groups, so these new compounds often exhibit two or more activities, acting as multi-target drugs (mt-drugs), and may have superior safety or efficacy. Integrating a range of information with sophisticated new in silico, bioinformatics, structural biology, and pharmacogenomics methods may be useful for the discovery/design and synthesis of new hybrid molecules. In this regard, many rational and screening approaches have been followed by medicinal chemists for lead generation in MTDD. Here, we review some popular lead generation approaches that have been used for designing multiple ligands (DMLs). This paper focuses on dual-acting chemical entities that incorporate parts of two drugs or bioactive compounds to compose hybrid molecules. It also presents some key concepts and limitations/strengths of lead generation methods by comparing the combination framework method with screening approaches. In addition, a number of examples representing applications of hybrid molecules in drug discovery are included. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
The design and implementation of hydrographical information management system (HIMS)
NASA Astrophysics Data System (ADS)
Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming
2005-10-01
With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management mode and techniques are unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. Advanced techniques including GIS, RS, spatial database management and VR are introduced to solve these problems. Some design principles and key techniques of the HIMS are illustrated in detail, including a mixed mode based on B/S, C/S and stand-alone computer modes; multi-source and multi-scale data organization and management; multi-source data integration and diverse visualization of digital charts; and efficient security control strategies. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received good evaluations.
Mátyus, Péter; Chai, Christina L L
2016-06-20
Multitargeting is a valuable concept in drug design for the development of effective drugs for the treatment of multifactorial diseases. This concept has most frequently been realized by incorporating two or more pharmacophores into a single hybrid molecule. Many such hybrids, due to the increased molecular size, exhibit unfavorable physicochemical properties leading to adverse effects and/or an inappropriate ADME (absorption, distribution, metabolism, and excretion) profile. To avoid this limitation and achieve additional therapeutic benefits, here we describe a novel multitargeting strategy based on the synergistic effects of a parent drug and its active metabolite(s). The concept of metabolism-activated multitargeting (MAMUT) is illustrated using a number of examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Castrignanò, Annamaria; Quarto, Ruggiero; Vitti, Carolina; Langella, Giuliano; Terribile, Fabio
2017-01-01
To assess spatial variability at the very fine scale required by Precision Agriculture, different proximal and remote sensors have been used. They provide large amounts and different types of data which need to be combined. An integrated approach, using multivariate geostatistical data-fusion techniques and multi-source geophysical sensor data to determine simple summary scale-dependent indices, is described here. These indices can be used to delineate management zones to be submitted to differential management. Such a data fusion approach with geophysical sensors was applied in a soil of an agronomic field cropped with tomato. The synthetic regionalized factors determined contributed to splitting the 3D edaphic environment into two main horizontal structures with different hydraulic properties and to disclosing two main horizons in the 0–1.0-m depth with a discontinuity probably occurring between 0.40 m and 0.70 m. Comparing this partition with the soil properties measured with a shallow sampling, it was possible to verify the coherence in the topsoil between the dielectric properties and other properties more directly related to agronomic management. These results confirm the advantages of using proximal sensing as a preliminary step in the application of site-specific management. Combining disparate spatial data (data fusion) is not at all a naive problem and novel and powerful methods need to be developed. PMID:29207510
Castrignanò, Annamaria; Buttafuoco, Gabriele; Quarto, Ruggiero; Vitti, Carolina; Langella, Giuliano; Terribile, Fabio; Venezia, Accursio
2017-12-03
To assess spatial variability at the very fine scale required by Precision Agriculture, different proximal and remote sensors have been used. They provide large amounts and different types of data which need to be combined. An integrated approach, using multivariate geostatistical data-fusion techniques and multi-source geophysical sensor data to determine simple summary scale-dependent indices, is described here. These indices can be used to delineate management zones to be submitted to differential management. Such a data fusion approach with geophysical sensors was applied in a soil of an agronomic field cropped with tomato. The synthetic regionalized factors determined contributed to splitting the 3D edaphic environment into two main horizontal structures with different hydraulic properties and to disclosing two main horizons in the 0-1.0-m depth with a discontinuity probably occurring between 0.40 m and 0.70 m. Comparing this partition with the soil properties measured with a shallow sampling, it was possible to verify the coherence in the topsoil between the dielectric properties and other properties more directly related to agronomic management. These results confirm the advantages of using proximal sensing as a preliminary step in the application of site-specific management. Combining disparate spatial data (data fusion) is not at all a naive problem and novel and powerful methods need to be developed.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
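A small sketch of the Haar-DWT energy features discussed above, using a plain energy-fraction threshold as a simple stand-in for the maximum-entropy significance test; the synthetic vehicle signature and all parameter choices are assumptions for illustration.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Plain multi-level Haar DWT (orthonormal): detail bands plus final approximation.
    Assumes len(signal) is divisible by 2**levels."""
    approx, bands = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        bands.append(d)
        approx = a
    bands.append(approx)
    return bands

def band_energy_features(signal, levels=3, keep_fraction=0.8):
    """Energy-per-band features: in each band only the largest coefficients carrying
    `keep_fraction` of the energy are counted as significant."""
    feats = []
    for band in haar_dwt(signal, levels):
        e = np.sort(band ** 2)[::-1]
        cum = np.cumsum(e) / e.sum()
        k = np.searchsorted(cum, keep_fraction) + 1   # number of significant coefficients
        feats.append(e[:k].sum())
    return np.array(feats)

# Hypothetical acoustic/seismic return from a ground vehicle (two tones + noise)
t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(256)
print(band_energy_features(x))   # one feature per resolution level, fed to the classifier
```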
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth's system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland and forest. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions are widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic and serial observations of economic damage and environmental disasters on earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of UAS technology include great mobility and availability for rapid, near-real-time operation under more flexible weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed wastes and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using the DSM data and can be used as a reference for disaster recovery in the future.
Dresen, S; Ferreirós, N; Gnann, H; Zimmermann, R; Weinmann, W
2010-04-01
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
General practitioner registrars' experiences of multisource feedback: a qualitative study.
Findlay, Nigel
2012-09-01
To explore the experiences of general practitioner (GP) specialty training registrars, thereby generating more understanding of the ways in which multisource feedback impacts upon their self-perceptions and professional behaviour, and provide information that might guide its use in the revalidation process of practising GPs. Complete transcripts of semi-structured, audio-taped qualitative interviews were analysed using the constant comparative method, to describe the experiences of multisource feedback for individual registrars. Five GP registrars participated. The first theme to emerge was the importance of the educational supervisor in encouraging the registrar through the emotional response, then facilitating interpretation of feedback and personal development. The second was the differing attitudes to learning and development, which may be in conflict with threats to self-image. The current RCGP format for obtaining multisource feedback for GP registrars may not always be achieving its purpose of challenging self-perceptions and motivating improved performance. An enhanced qualitative approach, through personal interviews rather than anonymous questionnaires, may provide a more accurate picture. This would address the concerns of some registrars by reducing their logistical burden and may facilitate more constructive feedback. The educational supervisor has an important role in promoting personal development, once this feedback is shared. The challenge for teaching organisations is to create a climate of comfort for learning, yet encourage learning beyond a 'comfort zone'.
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
USDA-ARS?s Scientific Manuscript database
In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yunsong; Schuster, Gerard T.
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
Huang, Yunsong; Schuster, Gerard T.
2017-10-26
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
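A toy illustration (not the authors' code) of why frequency selection removes crosstalk: if each shot in the blended supergather is restricted to its own disjoint set of frequency bins, the sources can be separated exactly from their sum. In the actual method the frequency assignment is cycled over iterations so each source eventually contributes all frequencies; that schedule is omitted here.

```python
import numpy as np

def encode_supergather(shots, n_groups):
    """Frequency-selection blending (sketch): shot i keeps only frequency bins with
    index % n_groups == i % n_groups, so shots occupy disjoint bins and their sum
    carries no crosstalk."""
    nt = shots.shape[1]
    blended, masks = np.zeros(nt), []
    for i, shot in enumerate(shots):
        spec = np.fft.rfft(shot)
        mask = (np.arange(spec.size) % n_groups) == (i % n_groups)
        masks.append(mask)
        blended += np.fft.irfft(spec * mask, n=nt)
    return blended, masks

def decode(blended, masks, nt):
    """Exact separation: re-apply each shot's frequency mask to the blended trace."""
    spec = np.fft.rfft(blended)
    return [np.fft.irfft(spec * m, n=nt) for m in masks]

rng = np.random.default_rng(1)
shots = rng.standard_normal((4, 512))            # 4 sources, 512 time samples each
blended, masks = encode_supergather(shots, n_groups=4)
recovered = decode(blended, masks, nt=512)
# Each recovered trace equals the band-limited version of its own shot, with zero
# leakage from the other three sources.
```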
Engineered bifunctional proteins and stem cells: next generation of targeted cancer therapeutics.
Choi, Sung Hugh; Shah, Khalid
2016-09-01
Redundant survival signaling pathways and their crosstalk within tumor and/or between tumor and their microenvironment are key impediments to developing effective targeted therapies for cancer. Therefore developing therapeutics that target multiple receptor signaling pathways in tumors and utilizing efficient platforms to deliver such therapeutics are critical to the success of future targeted therapies. During the past two decades, a number of bifunctional multi-targeting antibodies, fusion proteins, and oncolytic viruses have been developed and various stem cell types have been engineered to efficiently deliver them to tumors. In this review, we discuss the design and efficacy of therapeutics targeting multiple pathways in tumors and the therapeutic potential of therapeutic stem cells engineered with bifunctional agents.
Kaur, Gaganpreet; Kaur, Maninder; Silakari, Om
2014-01-01
Current research endeavors to discover ultimate multi-target ligands, an increasingly feasible and attractive alternative to existing mono-targeted drugs for the treatment of the complex, multi-factorial inflammation process that underlies a plethora of debilitating health conditions. To realize this option, exploration of a relevant chemical core scaffold is of utmost importance. The privileged benzimidazole scaffold, a historically versatile structural motif, could offer a viable starting point in the search for novel multi-target ligands against the multi-factorial inflammation process since, when appropriately substituted, it can selectively modulate diverse receptors, pathways and enzymes associated with the pathogenesis of inflammation. Despite this remarkable capability, the multi-target capacity of the benzimidazole scaffold remains largely unexploited. With this in focus, the present review article attempts to provide a synopsis of published research to exemplify the valuable use of the benzimidazole nucleus and focuses on its suitability as a starting scaffold for developing multi-targeted anti-inflammatory ligands.
NASA Astrophysics Data System (ADS)
Li, J.; Wen, G.; Li, D.
2018-04-01
To master background information on the utilization and ecological condition of Yunnan province grassland resources and to improve grassland management capacity, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional grassland resource investigation method is ground-based investigation, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, in contrast, is low cost, wide ranging and efficient, and can reflect the present situation of grassland resources objectively; it has become an indispensable grassland monitoring technology and data source and has gained more and more recognition and application in grassland resource monitoring research. This paper researches the application of multi-source remote sensing images in the Yunnan province grassland resources investigation. First, it extracts grassland resource thematic information and conducts field investigation through segmentation of BJ-2 high spatial resolution images. Second, it classifies grassland types and evaluates grassland degradation degree using the high-resolution characteristics of Landsat 8 images. Third, it obtains a grass yield model and quality classification using the high temporal resolution and wide swath of MODIS images together with sample investigation data. Finally, it performs qualitative field analysis of grassland through UAV remote sensing images. The project implementation proves that multi-source remote sensing data can be applied to the grassland resources investigation in Yunnan province and is an indispensable method.
Objected-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, this paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the experiment uses the hyperspectral image to obtain feature knowledge of the remote sensing image and related spectral indices. Second, it uses the Lidar data to generate an nDSM (Normalized Digital Surface Model) and obtain elevation information. Finally, it combines the image feature knowledge, spectral indices and elevation information to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs particularly well for building classification. The method not only takes advantage of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective approach for object-oriented remote sensing image classification in the future.
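The nDSM step mentioned above is simply the difference between the Lidar-derived surface model and a bare-earth terrain model (the DTM is an assumed input here, and the tile values and height rule are made up for illustration):

```python
import numpy as np

def normalized_dsm(dsm, dtm):
    """nDSM = DSM - DTM: object height above ground, used to separate elevated
    classes (buildings, trees) from ground-level ones (roads, grass)."""
    ndsm = dsm - dtm
    return np.clip(ndsm, 0.0, None)      # small negative residuals are clamped to 0

# Hypothetical 3x3 tiles (metres): Lidar-derived surface and bare-earth models
dsm = np.array([[12.0, 12.5, 3.1],
                [11.8,  3.0, 3.0],
                [ 3.2,  3.1, 3.0]])
dtm = np.full((3, 3), 3.0)
ndsm = normalized_dsm(dsm, dtm)
building_mask = ndsm > 2.5               # a simple height rule feeding the ontology model
print(ndsm, building_mask, sep="\n")
```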
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases markedly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
ERIC Educational Resources Information Center
Goldring, Ellen B.; Mavrogordato, Madeline; Haynes, Katherine Taylor
2015-01-01
Purpose: A relatively new approach to principal evaluation is the use of multisource feedback, which typically entails a leader's self-evaluation as well as parallel evaluations from subordinates, peers, and/or superiors. However, there is little research on how principals interact with evaluation data from multisource feedback systems. This…
Yang, Guanxue; Wang, Lin; Wang, Xiaofan
2017-06-07
Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Having obtained multi-source datasets through a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method exploits different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrates sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.
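The core computational ingredient described above is a group lasso penalty applied to the lagged (RBF-expanded) features of each candidate driver node: a node whose coefficient block survives shrinkage is taken as a causal parent of the target series. The sketch below, a minimal proximal-gradient group lasso on synthetic linear features, only illustrates that selection principle; the RBF expansion, the conditional Granger test and all data shapes here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_groups, group_size = 200, 5, 4
groups = [np.arange(g * group_size, (g + 1) * group_size) for g in range(n_groups)]
X = rng.normal(size=(n_samples, n_groups * group_size))   # lagged features, one group per driver node
beta_true = np.zeros(n_groups * group_size)
beta_true[groups[1]] = rng.normal(size=group_size)        # only node 1 actually drives the target
y = X @ beta_true + 0.1 * rng.normal(size=n_samples)

def group_soft_threshold(b, thresh):
    # proximal operator of the group lasso penalty: shrink the whole block or zero it out
    norm = np.linalg.norm(b)
    return np.zeros_like(b) if norm <= thresh else (1 - thresh / norm) * b

lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2                    # 1 / Lipschitz constant of the gradient
beta = np.zeros(X.shape[1])
for _ in range(500):
    grad = X.T @ (X @ beta - y)                           # gradient of the least-squares term
    beta = beta - step * grad
    for g in groups:                                      # block soft-thresholding per driver node
        beta[g] = group_soft_threshold(beta[g], step * lam)

selected = [i for i, g in enumerate(groups) if np.linalg.norm(beta[g]) > 1e-6]
print("nodes kept as Granger parents of the target:", selected)
```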
NASA Astrophysics Data System (ADS)
Davenport, Jack H.
2016-05-01
Intelligence analysts demand rapid information fusion capabilities to develop and maintain accurate situational awareness and understanding of dynamic enemy threats in asymmetric military operations. The ability to extract relationships between people, groups, and locations from a variety of text datasets is critical to proactive decision making. The derived network of entities must be automatically created and presented to analysts to assist in decision making. DECISIVE ANALYTICS Corporation (DAC) provides capabilities to automatically extract entities, relationships between entities, semantic concepts about entities, and network models of entities from text and multi-source datasets. DAC's Natural Language Processing (NLP) Entity Analytics models entities as complex systems of attributes and interrelationships extracted from unstructured text via NLP algorithms. The extracted entities are automatically disambiguated via machine learning algorithms, and resolution recommendations are presented to the analyst for validation; the analyst's expertise is leveraged in this hybrid human/computer collaborative model. Military capability is enhanced by these NLP Entity Analytics because analysts can now create or update an entity profile with intelligence automatically extracted from unstructured text, thereby fusing entity knowledge from structured and unstructured data sources. Operational and sustainment costs are reduced since analysts do not have to manually tag and resolve entities.
NASA Astrophysics Data System (ADS)
Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak
Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Inspired by the logic of commodity, core competencies and economies of scale, organisations transfer assets, existing departments and IT functions to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission-critical applications. Managing mission-critical applications requires a different approach, in which building relationships is critical. The relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high-quality services.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source, multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, the large amount of data required for air quality modeling, including emission sources, air quality monitoring data, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed quantitatively in greater detail. The developed modeling approach was applied to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
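For readers unfamiliar with the point-source part of such a model, the following sketch evaluates the standard Gaussian plume formula and sums the contributions from several sources at one receptor. The emission rates, stack heights and dispersion coefficients are illustrative assumptions, not the GMSMB parameterisation.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Concentration (g/m^3) at a receptor downwind of one point source.

    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    y/z: crosswind and vertical receptor coordinates (m), sigma_y/sigma_z:
    dispersion coefficients (m) at the receptor's downwind distance (normally
    taken from atmospheric stability curves).
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = np.exp(-(z - h)**2 / (2 * sigma_z**2)) + np.exp(-(z + h)**2 / (2 * sigma_z**2))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# total concentration at one receptor = sum of contributions of all point sources
sources = [dict(q=80.0, h=25.0), dict(q=40.0, h=10.0)]          # illustrative sources
c = sum(gaussian_plume(s["q"], u=3.0, y=20.0, z=1.5, h=s["h"],
                       sigma_y=35.0, sigma_z=18.0) for s in sources)
print(f"receptor concentration ~ {c:.2e} g/m^3")
```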
Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Zhao, Zishu; Li, Yuankun
2017-07-26
In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, this cannot achieve precise multi-target registration when the scene is non-planar. Our framework is devoted to solving the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy, where only the features on the corresponding foreground pairs are matched. Besides, new reservoirs based on the Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix on all tested datasets.
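A minimal OpenCV sketch of the per-target idea follows: features are matched only inside the foreground mask of one tracked target, and that target's own homography is fitted with RANSAC, instead of one global homography for the whole non-planar scene. The ORB/brute-force matcher choice and all variable names are assumptions for illustration; the paper's reservoir and tracking machinery is not reproduced.

```python
import cv2
import numpy as np

def register_single_target(img_ir, img_vis, mask_ir, mask_vis):
    """Estimate one homography for one target from its foreground masks."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_ir, mask_ir)    # features restricted to the target blob
    kp2, des2 = orb.detectAndCompute(img_vis, mask_vis)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # one homography per target; repeat for every tracked foreground blob
```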
Automatic snow extent extraction in alpine environments: short and medium term 2000-2006 analysis
NASA Astrophysics Data System (ADS)
Gamba, P.; Lisini, G.; Merlin, E.; Riva, F.
2007-10-01
Water resources in Northern Italy have diminished dramatically in the past 10 to 20 years, and recent phenomena connected to climate change have further sharpened the trend. To match the observed and collected information with this experience and find methodologies to improve the water management cycle in the Lombardy Region, the University of Milan Bicocca, Fondazione Lombardia per l'Ambiente and ARPA Lombardia are currently funding a project named "Regional Impact of Climatic Change in Lombardy Water Resources: Modelling and Applications" (RICLIC-WARM). In the framework of this project, the fraction of water available and provided to the whole regional network by the snow cover of the Alps will be investigated by means of remotely sensed data. While a number of algorithms are already devoted to this task for data coming from various sensors in the visible and infrared regions, no operative comparison and analytical assessment of the advantages and drawbacks of using different data has been attempted. This work paves the way for a fusion of the available information, as well as a multi-source mapping procedure able to successfully exploit the huge quantity of data available for the past and the even larger amount that may be accessed in the future. To this aim, a comparison on selected dates over the whole 2000-2006 period was performed.
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
NASA Astrophysics Data System (ADS)
Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.
2016-11-01
The geological disposal of high-level radioactive waste (hereinafter "geological disposal") is a long-term, complex and systematic scientific project. The data and information resources produced during its research and development (hereinafter "R&D") provide significant support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the research and engineering data generated during siting of geological disposal repositories are complicated (multi-source, multi-dimensional and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the data model design of a geo-information database for the disposal repository faces serious challenges. In this work, the data resources of the pre-selected areas of the repository are comprehensively reviewed and systematically analyzed. Based on a deep understanding of the application requirements, the research provides a solution to the key technical problems of a reasonable classification system for multi-source data entities, complex logical relations and effective physical storage structures. The solution breaks through the data classification and conventional spatial data organization models applied in the traditional industry, and realizes data organization and integration with data entities and spatial relationships as the basic units, which are independent, complete and of significant application value in HLW geological disposal. Reasonable, feasible and flexible conceptual, logical and physical data models have been established so as to ensure the effective integration, and facilitate the application development, of multi-source data in the pre-selected areas for geological disposal.
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
Grassland Npp Monitoring Based on Multi-Source Remote Sensing Data Fusion
NASA Astrophysics Data System (ADS)
Cai, Y. R.; Zheng, J. H.; Du, M. J.; Mu, C.; Peng, J.
2018-04-01
Vegetation is an important part of the terrestrial ecosystem. It plays an important role in the energy and material exchange of the ground-atmosphere system and is a key part of the global carbon cycle. Climate change has an important influence on the carbon cycle of terrestrial ecosystems, and Net Primary Productivity (NPP) is an important parameter for evaluating global terrestrial ecosystems. For the Xinjiang region, the study of grassland NPP has gradually become a hot issue in ecological and environmental research, and increasing the estimation accuracy of NPP is of great significance to the development of the ecosystem in Xinjiang. Based on the third-generation GIMMS AVHRR NDVI global vegetation dataset and the monthly MODIS NDVI product (MOD13A3) distributed by the National Oceanic and Atmospheric Administration (NOAA), and combining the advantages of the different remotely sensed datasets, this paper derived a new maximum-value-composite fused normalized difference vegetation index (NDVI) time series for 2006-2015 and analyzed the net primary productivity of grassland vegetation in Xinjiang with an improved CASA model. The method proves the feasibility of the data fusion, and the accuracy of the NPP calculated from the fused NDVI is greatly improved. The results show that: (1) the NPP calculated from the new fused NDVI obtained from GIMMS AVHRR NDVI and MODIS NDVI is significantly higher than the NPP calculated from either of the two raw datasets; (2) the interannual changes of grassland NPP in Xinjiang show an overall increasing trend, and the interannual changes in NPP have a certain relationship with precipitation.
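The two processing steps named in the abstract, per-pixel maximum-value compositing of the two NDVI sources and a light-use-efficiency (CASA-style) NPP estimate, can be sketched as follows. The array shapes, the fPAR-NDVI proxy and the constant light-use efficiency are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
ndvi_gimms = rng.uniform(0.1, 0.6, size=(12, 100, 100))   # monthly NDVI, source 1 (resampled to a common grid)
ndvi_modis = rng.uniform(0.1, 0.7, size=(12, 100, 100))   # monthly NDVI, source 2

ndvi_fused = np.maximum(ndvi_gimms, ndvi_modis)           # per-pixel maximum-value composite

par = rng.uniform(150.0, 300.0, size=(12, 100, 100))      # photosynthetically active radiation (MJ/m^2/month)
fpar = np.clip(1.2 * ndvi_fused - 0.1, 0.0, 0.95)         # simple linear fPAR-NDVI proxy (assumed)
eps = 0.5                                                 # light-use efficiency (gC/MJ), assumed constant
npp_monthly = par * fpar * eps                            # CASA-style NPP = APAR * eps, gC/m^2 per month
npp_annual = npp_monthly.sum(axis=0)
print("mean annual NPP (gC/m^2):", float(npp_annual.mean()))
```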
Multisource least-squares reverse-time migration with structure-oriented filtering
NASA Astrophysics Data System (ADS)
Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong
2016-09-01
The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve the data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as preconditioner into the multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
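The inversion loop behind least-squares reverse-time migration is a conjugate-gradient minimisation of the data misfit given forward (demigration) and adjoint (migration) operators. The sketch below uses a toy dense matrix in place of the wave-equation operators and omits the structure-oriented filtering preconditioner, so it only illustrates the conjugate-gradient least-squares (CGLS) skeleton under those assumptions.

```python
import numpy as np

def cgls(forward, adjoint, d, n_model, n_iter=50):
    """Minimise ||forward(m) - d||^2 by conjugate-gradient least squares."""
    m = np.zeros(n_model)
    r = d - forward(m)            # data residual
    s = adjoint(r)                # gradient direction in model space
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = forward(p)
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = adjoint(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

rng = np.random.default_rng(0)
L = rng.normal(size=(120, 80))                 # stand-in for the demigration operator
m_true = rng.normal(size=80)
d = L @ m_true + 0.01 * rng.normal(size=120)   # "observed" (blended) data
m_est = cgls(lambda m: L @ m, lambda r: L.T @ r, d, n_model=80)
print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```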
Zheng, Chunli; Wang, Jinan; Liu, Jianling; Pei, Mengjie; Huang, Chao; Wang, Yonghua
2014-08-01
The term systems pharmacology describes a field of study that uses computational and experimental approaches to broaden the view of drug actions rooted in molecular interactions and advance the process of drug discovery. The aim of this work is to highlight the role that systems pharmacology plays across multi-target drug discovery from natural products for cardiovascular diseases (CVDs). Firstly, based on network pharmacology methods, we reconstructed the drug-target and target-target networks to determine the putative protein target set of multi-target drugs for CVD treatment. Secondly, we reintegrated a compound dataset of natural products and then obtained a multi-target compound subset by a virtual-screening process. Thirdly, a drug-likeness evaluation was applied to find the ADME-favorable compounds in this subset. Finally, we conducted in vitro experiments to evaluate the reliability of the selected chemicals and targets. We found that four of the five randomly selected natural molecules can effectively act on the target set for CVDs, indicating the reasonability of our systems-based method. This strategy may serve as a new model for multi-target drug discovery for complex diseases.
Satisfaction Formation Processes in Library Users: Understanding Multisource Effects
ERIC Educational Resources Information Center
Shi, Xi; Holahan, Patricia J.; Jurkat, M. Peter
2004-01-01
This study explores whether disconfirmation theory can explain satisfaction formation processes in library users. Both library users' needs and expectations are investigated as disconfirmation standards. Overall library user satisfaction is predicted to be a function of two independent sources--satisfaction with the information product received…
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.
Challenges with secondary use of multi-source water-quality data in the United States
Sprague, Lori A.; Oelsner, Gretchen P.; Argue, Denise M.
2017-01-01
Combining water-quality data from multiple sources can help counterbalance diminishing resources for stream monitoring in the United States and lead to important regional and national insights that would not otherwise be possible. Individual monitoring organizations understand their own data very well, but issues can arise when their data are combined with data from other organizations that have used different methods for reporting the same common metadata elements. Such use of multi-source data is termed "secondary use", that is, use of the data beyond the original intent determined by the organization that collected them. In this study, we surveyed more than 25 million nutrient records collected by 488 organizations in the United States since 1899 to identify major inconsistencies in metadata elements that limit the secondary use of multi-source data. Nearly 14.5 million of these records had missing or ambiguous information for one or more key metadata elements, including (in decreasing order of records affected) sample fraction, chemical form, parameter name, units of measurement, precise numerical value, and remark codes. As a result, metadata harmonization to make secondary use of these multi-source data will be time consuming, expensive, and inexact. Different data users may make different assumptions about the same ambiguous data, potentially resulting in different conclusions about important environmental issues. The value of these ambiguous data is estimated at $US12 billion, a substantial collective investment by water-resource organizations in the United States. By comparison, the value of unambiguous data is estimated at $US8.2 billion. The ambiguous data could be preserved for uses beyond the original intent by developing and implementing standardized metadata practices for future and legacy water-quality data throughout the United States.
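As a small illustration of the harmonisation work the survey anticipates, the pandas sketch below maps inconsistent "sample fraction" labels onto a controlled vocabulary and flags records that remain ambiguous. The vocabulary mapping and the example rows are illustrative assumptions, not the actual metadata elements of the surveyed records.

```python
import pandas as pd

# toy multi-source records with inconsistent metadata (assumed values)
records = pd.DataFrame({
    "org": ["A", "B", "C", "D"],
    "parameter": ["Nitrate", "NO3", "Nitrate as N", "Nitrate"],
    "sample_fraction": ["Dissolved", "filtered", None, "Total"],
})

# controlled vocabulary: harmonise synonymous labels, leave unknowns unmapped
fraction_vocab = {"dissolved": "Dissolved", "filtered": "Dissolved", "total": "Total"}
records["fraction_std"] = records["sample_fraction"].str.lower().map(fraction_vocab)
records["ambiguous"] = records["fraction_std"].isna()     # records unusable without follow-up

print(records[["org", "parameter", "fraction_std", "ambiguous"]])
```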
NASA Astrophysics Data System (ADS)
Renschler, Chris S.; Wang, Zhihao
2017-10-01
In light of climate and land use change, stakeholders around the world are interested in assessing historic and likely future flood dynamics and flood extents for decision-making in watersheds with dams as well as limited availability of stream gages and costly technical resources. This research evaluates an assessment and communication approach that combines GIS and hydraulic modeling based on the latest remote sensing and topographic imagery, comparing the results to an actual flood event and available stream gages. On August 28th 2011, floods caused by Hurricane Irene swept through a large rural area in New York State, leaving thousands of people homeless and devastating towns and cities. Damage was widespread, though the estimated and actual flood inundation and the associated return period were still unclear, since the flooding was artificially increased by flood-water release due to fear of a dam break. This research uses the stream section right below the dam between the two stream gages at North Blenheim and Breakabeen along Schoharie Creek as a case study site to validate the approach. The data fusion approach uses a GIS, commonly available data sources, the hydraulic model HEC-RAS, and airborne LiDAR data collected two days after the flood event (Aug 30, 2011). The aerial imagery of the airborne survey depicts a low-flow event as well as evidence of the record flood, such as debris and other signs of damage, which is used to validate the hydrologic simulation results against the available stream gages. Model results were also compared to the official Federal Emergency Management Agency (FEMA) flood scenarios to determine the actual flood return period of the event. The dynamics of the flood levels were then used to visualize the flood and the actual loss of the Old Blenheim Bridge using Google Sketchup. Integration of multi-source data, cross-validation and visualization provides new ways to utilize pre- and post-event remote sensing imagery and hydrologic models to better understand and communicate the complex spatial-temporal dynamics, return periods and potential or actual consequences to decision-makers and the local population.
Favia, Angelo D; Habrant, Damien; Scarpelli, Rita; Migliore, Marco; Albani, Clara; Bertozzi, Sine Mandrup; Dionisi, Mauro; Tarozzo, Glauco; Piomelli, Daniele; Cavalli, Andrea; De Vivo, Marco
2012-10-25
Pain and inflammation are major therapeutic areas for drug discovery. Current drugs for these pathologies have limited efficacy, however, and often cause a number of unwanted side effects. In the present study, we identify the nonsteroidal anti-inflammatory drug carprofen as a multitarget-directed ligand that simultaneously inhibits cyclooxygenase-1 (COX-1), COX-2, and fatty acid amide hydrolase (FAAH). Additionally, we synthesized and tested several derivatives of carprofen, sharing this multitarget activity. This may result in improved analgesic efficacy and reduced side effects (Naidu et al. J. Pharmacol. Exp. Ther. 2009, 329, 48-56; Fowler, C. J.; et al. J. Enzyme Inhib. Med. Chem. 2012, in press; Sasso et al. Pharmacol. Res. 2012, 65, 553). The new compounds are among the most potent multitarget FAAH/COX inhibitors reported so far in the literature and thus may represent promising starting points for the discovery of new analgesic and anti-inflammatory drugs.
On Meaningful Measurement: Concepts, Technology and Examples.
ERIC Educational Resources Information Center
Cheung, K. C.
This paper discusses how concepts and procedural skills in problem-solving tasks, as well as affects and emotions, can be subjected to meaningful measurement (MM), based on a multisource model of learning and a constructivist information-processing theory of knowing. MM refers to the quantitative measurement of conceptual and procedural knowledge…
Cross-Modulation Interference with Lateralization of Mixed-Modulated Waveforms
ERIC Educational Resources Information Center
Hsieh, I-Hui; Petrosyan, Agavni; Goncalves, Oscar F.; Hickok, Gregory; Saberi, Kourosh
2010-01-01
Purpose: This study investigated the ability to use spatial information in mixed-modulated (MM) sounds containing concurrent frequency-modulated (FM) and amplitude-modulated (AM) sounds by exploring patterns of interference when different modulation types originated from different loci as may occur in a multisource acoustic field. Method:…
Ambure, Pravin; Bhat, Jyotsna; Puzyn, Tomasz; Roy, Kunal
2018-04-23
Alzheimer's disease (AD) is a multi-factorial disease, which can be simply outlined as an irreversible and progressive neurodegenerative disorder with an unclear root cause. It is a major cause of dementia in old aged people. In the present study, utilizing the structural and biological activity information of ligands for five important and widely studied vital targets believed to be effective against AD (i.e., cyclin-dependent kinase 5, β-secretase, monoamine oxidase B, glycogen synthase kinase 3β, and acetylcholinesterase), we have developed five classification models using the linear discriminant analysis (LDA) technique. Considering the importance of data curation, we have given particular attention to chemical and biological data curation, which is a difficult task especially in the case of big datasets. Thus, to ease the curation process we have designed Konstanz Information Miner (KNIME) workflows, which are made available at http://teqip.jdvu.ac.in/QSAR_Tools/ . The developed models were appropriately validated based on predictions for experiment-derived data from test sets, as well as true external-set compounds including known multi-target compounds. The domain of applicability for each classification model was checked based on a confidence estimation approach. Further, these validated models were employed for screening of natural compounds collected from the InterBioScreen natural database ( https://www.ibscreen.com/natural-compounds ). The natural compounds that were categorized as 'actives' in at least two of the five classification models were considered as multi-target leads, and these compounds were further screened using a drug-likeness filter and molecular docking, and then thoroughly analyzed using molecular dynamics studies. Finally, the most promising multi-target natural compounds against AD are suggested.
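A minimal scikit-learn sketch of one of the five per-target classifiers is given below: an LDA model labels compounds as active or inactive for a single target from a descriptor matrix, with a crude probability band standing in for the applicability-domain check. The descriptors, labels and threshold are illustrative assumptions, not the curated data or domain test of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                      # 12 molecular descriptors per compound (toy data)
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=300) > 0).astype(int)  # toy activity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

proba = lda.predict_proba(X_te)[:, 1]
confident = np.abs(proba - 0.5) > 0.2               # crude confidence band, not the paper's domain test
print("test accuracy:", lda.score(X_te, y_te))
print("fraction of screened compounds inside the confidence band:", confident.mean())
```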
A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter
2015-06-27
The estimation problem is formulated with two objective functions in conflict: the first is the information-theoretic part of the problem and aims for entropy maximization, while the second arises from the constraint in the formulation. For the sake of completeness and clarity, the report also summarizes the underlying information-theoretic concepts, such as entropy, and how each is utilized.
Application of Ontology Technology in Health Statistic Data Analysis.
Guo, Minjiang; Hu, Hongpu; Lei, Xingyun
2017-01-01
Research Purpose: to establish a health management ontology for the analysis of health statistic data. Proposed Methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook, and used protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, multi-source heterogeneous data can be integrated, enabling administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source, heterogeneous health system management data and the enhancement of management efficiency.
Sharma, Megha; Sharma, Kusum; Sharma, Aman; Gupta, Nalini; Rajwanshi, Arvind
2016-09-01
Tuberculous lymphadenitis (TBLA), the most common presentation of tuberculosis, poses a significant diagnostic challenge in the developing countries. Timely, accurate and cost-effective diagnosis can decrease the high morbidity associated with TBLA especially in resource-poor high-endemic regions. The loop-mediated isothermal amplification assay (LAMP), using two targets, was evaluated for the diagnosis of TBLA. LAMP assay using 3 sets of primers (each for IS6110 and MPB64) was performed on 170 fine needle aspiration samples (85 confirmed, 35 suspected, 50 control cases of TBLA). Results were compared against IS6110 PCR, cytology, culture and smear. The overall sensitivity and specificity of LAMP assay, using multi-targeted approach, was 90% and 100% respectively in diagnosing TBLA. The sensitivity of multi-targeted LAMP, only MPB64 LAMP, only IS6110 LAMP and IS6110 PCR was 91.7%, 89.4%, 84.7% and 75.2%, respectively among confirmed cases and 85.7%, 77.1%, 68.5% and 60%, respectively among suspected cases of TBLA. Additional 12/120 (10%) cases were detected using multi-targeted method. The multi-targeted LAMP, with its speedy and reliable results, is a potential diagnostic test for TBLA in low-resource countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fusion of multi-source remote sensing data for agriculture monitoring tasks
NASA Astrophysics Data System (ADS)
Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.
2016-12-01
Remote sensing data are an essential source of information for the monitoring and quantification of crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days, GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse-resolution empirical yield models can be downscaled to provide yield estimates at regional and field scale. In particular, we present a case study of downscaling the MODIS CMG-based generalized winter wheat yield forecasting model to high-spatial-resolution datasets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA-2-derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of the yearly derived winter crop maps is their use for stratification within area-frame sampling for crop area estimation; in particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and the strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In the case of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
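The unsupervised winter-crop mapping step can be sketched with scikit-learn's Gaussian mixture model: fit two components to per-pixel features such as peak NDVI and accumulated GDD, then label the component with the higher mean NDVI as winter crop. The feature construction and the two-component choice are illustrative assumptions; the operational MODIS/MERRA-2 processing chain is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# toy per-pixel features: [peak NDVI, GDD at peak]; winter crops peak early at high NDVI
winter = np.column_stack([rng.normal(0.75, 0.05, 500), rng.normal(350, 40, 500)])
other = np.column_stack([rng.normal(0.45, 0.08, 500), rng.normal(700, 80, 500)])
features = np.vstack([winter, other])

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)
winter_component = int(np.argmax(gmm.means_[:, 0]))     # component with the higher mean NDVI
winter_mask = labels == winter_component
print("pixels mapped as winter crop:", int(winter_mask.sum()), "of", len(features))
```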
2013-12-14
Report fragment on the statistics of large sample covariance matrices, including results on the population covariance matrix with application to array signal processing, and a central limit theorem (CLT) for linear statistics of sample covariance matrices. Associated publications include Hachem, Kharouf, Najim and Silverstein, "A CLT for Information-Theoretic Statistics", with application to multi-source power estimation (2010, 2012).
A new FOD recognition algorithm based on multi-source information fusion and experiment analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Xiao, Gang
2011-08-01
Foreign Object Debris (FOD) is any substance, debris or article alien to an aircraft or system that could potentially cause serious damage when it appears on an airport runway. Given the complex circumstances at an airport, quick and precise detection of FOD targets on the runway is one of the important protections for aircraft safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. Firstly, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar, and according to these coordinates the IR camera takes target images and background images. Secondly, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform; the potential target region, that is, the runway region, can then be segmented from the whole image. Thirdly, background subtraction is utilized to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. The experimental results show that this algorithm can effectively reduce the computational complexity, satisfy the real-time requirement and achieve a high detection and recognition probability.
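Two of the image-processing steps listed above, Hough-transform extraction of the straight runway edges and background subtraction to localise the FOD target, can be sketched with OpenCV as follows. The synthetic images and thresholds are illustrative assumptions; the radar cueing and classification stages are not shown.

```python
import cv2
import numpy as np

# synthetic stand-ins for the IR background and the current IR frame (assumed inputs)
ir_background = np.full((240, 320), 60, dtype=np.uint8)
cv2.line(ir_background, (40, 239), (120, 0), 255, 3)      # left runway edge
cv2.line(ir_background, (280, 239), (200, 0), 255, 3)     # right runway edge
ir_frame = ir_background.copy()
cv2.circle(ir_frame, (160, 150), 6, 200, -1)              # a small bright FOD object

# 1) runway edges: Canny edges + probabilistic Hough transform for long straight lines
edges = cv2.Canny(ir_frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=100, maxLineGap=20)

# 2) FOD localisation: absolute difference to the background image, then threshold
diff = cv2.absdiff(ir_frame, ir_background)
_, fod_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(fod_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("runway line segments:", 0 if lines is None else len(lines),
      "| candidate FOD blobs:", len(contours))
```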
Malling, Bente; Mortensen, Lene; Bonderup, Thomas; Scherpbier, Albert; Ringsted, Charlotte
2009-12-10
Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. On this background we aimed to study the additional effect of a leadership course following a multi-source feedback procedure compared to multi-source feedback alone, especially regarding the development of leadership skills over time. Study participants were consultants responsible for postgraduate medical education at clinical departments. The study design was pre-post measures with an intervention and a control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using Mann-Whitney's U-test and multivariate analysis of variance. There were no differences in multi-source feedback scores at one year follow-up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). The study indicates that a leadership course following an MSF procedure, compared to MSF alone, does not improve the leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time, and the time frame of one year might have been too short to show improvement in the leadership skills of consultants responsible for education. Further studies are needed to investigate whether other combinations of initiatives to develop leadership might have more impact in the clinical setting.
Multi-targeted priming for genome-wide gene expression assays.
Adomas, Aleksandra B; Lopez-Giraldez, Francesc; Clark, Travis A; Wang, Zheng; Townsend, Jeffrey P
2010-08-17
Complementary approaches to assaying global gene expression are needed to assess gene expression in regions that are poorly assayed by current methodologies. A key component of nearly all gene expression assays is the reverse transcription of transcribed sequences that has traditionally been performed by priming the poly-A tails on many of the transcribed genes in eukaryotes with oligo-dT, or by priming RNA indiscriminately with random hexamers. We designed an algorithm to find common sequence motifs that were present within most protein-coding genes of Saccharomyces cerevisiae and of Neurospora crassa, but that were not present within their ribosomal RNA or transfer RNA genes. We then experimentally tested whether degenerately priming these motifs with multi-targeted primers improved the accuracy and completeness of transcriptomic assays. We discovered two multi-targeted primers that would prime a preponderance of genes in the genomes of Saccharomyces cerevisiae and Neurospora crassa while avoiding priming ribosomal RNA or transfer RNA. Examining the response of Saccharomyces cerevisiae to nitrogen deficiency and profiling Neurospora crassa early sexual development, we demonstrated that using multi-targeted primers in reverse transcription led to superior performance of microarray profiling and next-generation RNA tag sequencing. Priming with multi-targeted primers in addition to oligo-dT resulted in higher sensitivity, a larger number of well-measured genes and greater power to detect differences in gene expression. Our results provide the most complete and detailed expression profiles of the yeast nitrogen starvation response and N. crassa early sexual development to date. Furthermore, our multi-targeting priming methodology for genome-wide gene expression assays provides selective targeting of multiple sequences and counter-selection against undesirable sequences, facilitating a more complete and precise assay of the transcribed sequences within the genome.
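A minimal sketch of the motif search described above: count every k-mer across protein-coding transcripts and keep those present in most coding genes but absent from the rRNA/tRNA sequences as candidate multi-target priming sites. The toy sequences, the value of k and the coverage cut-off are illustrative assumptions, not the authors' algorithm parameters.

```python
from collections import Counter

def kmers(seq, k):
    """Set of all k-mers occurring in one sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_primers(coding_seqs, rna_seqs, k=8, min_fraction=0.8):
    per_gene = [kmers(s, k) for s in coding_seqs]
    counts = Counter(kmer for gene in per_gene for kmer in gene)   # count once per gene
    excluded = set().union(*(kmers(s, k) for s in rna_seqs))       # anything present in rRNA/tRNA
    needed = min_fraction * len(coding_seqs)
    return sorted(kmer for kmer, n in counts.items() if n >= needed and kmer not in excluded)

coding = ["ATGGCTACCGTTAAGGCTACC", "GCTACCGTTAAGTTTGCTACC", "ATGGCTACCGTTAAGCCC"]  # toy mRNAs
rna = ["GGGAAACCCGGGTTT"]                                                          # toy rRNA/tRNA
print(candidate_primers(coding, rna, k=6))
```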
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
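The structure-free modelling idea can be sketched as predicting activity against one held-out target from a compound's binary activity profile over the remaining targets with a Bernoulli naive Bayes classifier. The randomly generated profile matrix below only illustrates the setup, not the profiling data or the model variants used in the study.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_compounds, n_targets = 500, 50
profiles = (rng.random((n_compounds, n_targets)) < 0.15).astype(int)   # sparse binary activity matrix
# inject a correlation so the demo has signal: target 0 often mirrors target 1
profiles[:, 0] = np.where(rng.random(n_compounds) < 0.7, profiles[:, 1], profiles[:, 0])

held_out = 0                                            # target whose activity we try to predict
X = np.delete(profiles, held_out, axis=1)               # activity profile over the other targets
y = profiles[:, held_out]
print("cross-validated accuracy:", cross_val_score(BernoulliNB(), X, y, cv=5).mean())
```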
ERIC Educational Resources Information Center
Blackman, Gabrielle L.; Ostrander, Rick; Herman, Keith C.
2005-01-01
Although ADHD and depression are common comorbidities in youth, few studies have examined this particular clinical presentation. To address method bias limitations of previous research, this study uses multiple informants to compare the academic, social, and clinical functioning of children with ADHD, children with ADHD and depression, and…
ERIC Educational Resources Information Center
Sargeant, Joan; MacLeod, Tanya; Sinclair, Douglas; Power, Mary
2011-01-01
Introduction: The Colleges of Physicians and Surgeons of Alberta and Nova Scotia (CPSNS) use a standardized multisource feedback program, the Physician Achievement Review (PAR/NSPAR), to provide physicians with performance assessment data via questionnaires from medical colleagues, coworkers, and patients on 5 practice domains: consultation…
NASA Astrophysics Data System (ADS)
Luo, Qiu; Xin, Wu; Qiming, Xiong
2017-06-01
In vegetation remote sensing information extraction, phenological features are often not considered and the performance of remote sensing analysis algorithms is low. To address this problem, a method for remote sensing vegetation information extraction based on EVI time series and a decision-tree classification with multi-source branch similarity is proposed. First, to improve the stability of recognition accuracy over the time series, the seasonal features of vegetation are extracted based on the fitting span of the time series. Second, decision-tree similarity is measured by adaptively selecting the path or the probability parameters of component prediction; this index evaluates the degree of task association, decides whether to perform migration of the multi-source decision tree, and ensures the speed of migration. Finally, the classification and recognition accuracy for pests and diseases reaches 87%-98% for commercial forests of Dalbergia hainanensis, which is significantly better than the MODIS coverage accuracy of 80%-96% in this area, verifying the validity of the proposed method.
Li, Jian; Yu, Haiyang; Wang, Sijian; Wang, Wei; Chen, Qian; Ma, Yanmin; Zhang, Yi; Wang, Tao
2018-01-01
Imbalanced hepatic glucose homeostasis is one of the critical pathologic events in the development of metabolic syndromes (MSs). Therefore, regulation of imbalanced hepatic glucose homeostasis is important in drug development for MS treatment. In this review, we discuss the major targets that regulate hepatic glucose homeostasis in human physiologic and pathophysiologic processes, involving hepatic glucose uptake, glycolysis and glycogen synthesis, and summarize their changes in MSs. Recent literature suggests the necessity of multitarget drugs in the management of MS disorders for the regulation of imbalanced glucose homeostasis in both experimental models and MS patients. Here, we highlight the potential bioactive compounds from natural products with medicinal or health care value, and focus on polypharmacologic and multitarget natural products with effects on various signaling pathways in hepatic glucose metabolism. This review shows the advantage and feasibility of discovering multicompound-multitarget drugs from natural products, and provides a new perspective on drug and functional food development for MSs.
Wang, Sijian; Wang, Wei; Chen, Qian; Ma, Yanmin; Zhang, Yi; Wang, Tao
2018-01-01
Imbalanced hepatic glucose homeostasis is one of the critical pathologic events in the development of metabolic syndromes (MSs). Therefore, regulation of imbalanced hepatic glucose homeostasis is important in drug development for MS treatment. In this review, we discuss the major targets that regulate hepatic glucose homeostasis in human physiologic and pathophysiologic processes, involving hepatic glucose uptake, glycolysis and glycogen synthesis, and summarize their changes in MSs. Recent literature suggests the necessity of multitarget drugs in the management of MS disorders for the regulation of imbalanced glucose homeostasis in both experimental models and MS patients. Here, we highlight the potential bioactive compounds from natural products with medicinal or health care value, and focus on polypharmacologic and multitarget natural products with effects on various signaling pathways in hepatic glucose metabolism. This review shows the advantage and feasibility of discovering multicompound-multitarget drugs from natural products, and provides a new perspective on drug and functional food development for MSs. PMID:29391777
The application of the geography census data in seismic hazard assessment
NASA Astrophysics Data System (ADS)
Yuan, Shen; Ying, Zhang
2017-04-01
Limited by the timeliness of the basic data in the earthquake emergency database of Sichuan province, there is a certain gap between post-earthquake disaster assessment results and the actual damage. In 2015, Sichuan completed its first provincial geographic conditions census, covering topography, transportation networks, vegetation coverage, water areas, desert and bare ground, residential areas and facilities, geographical units and geological hazards, as well as town planning, construction and ecological restoration in the Lushan earthquake-stricken area. On this basis, combining existing basic geographic information data and high-resolution imagery, supplemented by remote sensing image interpretation and geological survey, the distribution and changes of hazard-affected elements such as land cover, roads and infrastructure in Lushan county were statistically analyzed and extracted for the periods before 2013 and after 2015. At the same time, the transformation and updating from geographic conditions census data to earthquake emergency basic data were achieved by studying their data types, structures and relationships. Finally, based on multi-source disaster information, including the changed hazard-affected data and the coseismal displacement field of the Lushan magnitude-7.0 earthquake derived from the CORS network, intensity control points were obtained through information fusion, the seismic influence field was corrected, and the earthquake disaster was re-assessed through the technology platform of the Sichuan earthquake relief headquarters. Comparison of the new assessment result, the original assessment result and the actual earthquake losses shows that the revised evaluation is closer to the actual losses. In the future, normalized updating from geographic conditions census data to earthquake emergency basic data can be realized, ensuring the timeliness of the earthquake emergency database while continuously improving the accuracy of earthquake disaster assessment.
NASA Technical Reports Server (NTRS)
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution, which has two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with L1-norm one, which is termed as L1-LLR for short. Secondly, considering the robust graph regularization, we replace traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and therefore construct new graph-based semi-supervised learning framework with multi-source adaptation constraint, which is coined as L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
Specialty-specific multi-source feedback: assuring validity, informing training.
Davies, Helena; Archer, Julian; Bateman, Adrian; Dewar, Sandra; Crossley, Jim; Grant, Janet; Southgate, Lesley
2008-10-01
The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. Pearson correlation of MSF scores with OSPE performances was 0.48 (P = 0.001) and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = - 2.76, d.f. = 89, P < 0.01). Trainees scored least highly in relation to ability to use histopathology to solve clinical problems (mean = 4.39) and provision of good reports (mean = 4.39). Three of six doctors whose means were < 4.0 received free text comments about report writing. There were 83 forms with aggregate scores of < 4. Of these, 19.2% included comments about report writing. Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context as, in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel
2013-01-01
Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDF) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) which may appear in real biomedical data. Of the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
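The kind of comparison the paper studies can be sketched with SciPy: quantify how far the distribution of one variable drifts between two data sources using an information-theoretic distance (Jensen-Shannon on shared histogram bins) and the one-dimensional Earth Mover's (Wasserstein) distance. The two toy samples below are illustrative assumptions, not biobank data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source_a = rng.normal(loc=0.0, scale=1.0, size=2000)    # e.g. a lab value in data source A
source_b = rng.normal(loc=0.4, scale=1.2, size=2000)    # same variable, shifted, in data source B

# shared histogram bins so the two estimated PDFs are directly comparable
bins = np.histogram_bin_edges(np.concatenate([source_a, source_b]), bins=50)
p, _ = np.histogram(source_a, bins=bins, density=True)
q, _ = np.histogram(source_b, bins=bins, density=True)

print("Jensen-Shannon distance:", jensenshannon(p, q))
print("Earth Mover's distance :", wasserstein_distance(source_a, source_b))
```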
Using multilevel, multisource needs assessment data for planning community interventions.
Levy, Susan R; Anderson, Emily E; Issel, L Michele; Willis, Marilyn A; Dancy, Barbara L; Jacobson, Kristin M; Fleming, Shirley G; Copper, Elizabeth S; Berrios, Nerida M; Sciammarella, Esther; Ochoa, Mónica; Hebert-Beirne, Jennifer
2004-01-01
African Americans and Latinos share higher rates of cardiovascular disease (CVD) and diabetes compared with Whites. These diseases have common risk factors that are amenable to primary and secondary prevention. The goal of the Chicago REACH 2010-Lawndale Health Promotion Project is to eliminate disparities related to CVD and diabetes experienced by African Americans and Latinos in two contiguous Chicago neighborhoods using a community-based prevention approach. This article shares findings from the Phase 1 participatory planning process and discusses the implications these findings and lessons learned may have for programs aiming to reduce health disparities in multiethnic communities. The triangulation of data sources from the planning phase enriched interpretation and led to more creative and feasible suggestions for programmatic interventions across the four levels of the ecological framework. Multisource data yielded useful information for program planning and a better understanding of the cultural differences and similarities between African Americans and Latinos.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both forward direction and cross directions in multi-source interior CT. The physics model was integrated to an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the region-of-interests (ROIs) was reduced to 46 HU in numerical phantom dataset and 48 HU in physical phantom dataset respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
2009-01-01
Background: Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. Against this background, we aimed to study the additional effect of a leadership course following a multi-source feedback procedure compared to multi-source feedback alone, especially regarding the development of leadership skills over time. Methods: Study participants were consultants responsible for postgraduate medical education at clinical departments. Study design: pre-post measures with an intervention and a control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using the Mann-Whitney U-test and multivariate analysis of variance. Results: There were no differences in multi-source feedback scores at one-year follow-up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). Conclusion: The study indicates that a leadership course following an MSF procedure, compared to MSF alone, does not improve the leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time, and the time frame of one year might have been too short to show improvement in the leadership skills of consultants responsible for education. Further studies are needed to investigate whether other combinations of initiatives to develop leadership might have more impact in the clinical setting. PMID:20003311
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge to learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information of the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among them. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose an incomplete multisource transfer learning approach built on two directions of knowledge transfer, i.e., cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This practice compensates for missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for data missing from one portion of a source to another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
Development of a Multi-Target Contingency Management Intervention for HIV Positive Substance Users.
Stitzer, Maxine; Calsyn, Donald; Matheson, Timothy; Sorensen, James; Gooden, Lauren; Metsch, Lisa
2017-01-01
Contingency management (CM) interventions generally target a single behavior such as attendance or drug use. However, disease outcomes are mediated by complex chains of both healthy and interfering behaviors enacted over extended periods of time. This paper describes a novel multi-target contingency management (CM) program developed for use with HIV-positive substance users enrolled in a CTN multi-site study (0049 Project HOPE). Participants were randomly assigned to usual care (referral to health care and SUD treatment) or 6-month strength-based patient navigation interventions with (PN+CM) or without (PN only) the CM program. The primary outcome of the trial was viral load suppression at 12 months post-randomization. Up to $1160 could be earned over 6 months under escalating schedules of reinforcement. Earnings were divided among eight CM targets: two PN-related (PN visits, paperwork completion; 26% of possible earnings), four health-related (HIV care visits, lab blood draw visits, medication check, viral load suppression; 47% of possible earnings) and two drug-use abatement (treatment entry, submission of drug-negative UAs; 27% of earnings). The paper describes the rationale for the selection of targets, pay amounts and pay schedules. The CM program was compatible with and fully integrated into the PN intervention. The study design will allow comparison of behavioral and health outcomes for participants receiving PN with and without CM; results will inform future multi-target CM development. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data that cover different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images to understand the actual status of various land cover classes. The prospects for the use of satellite images in land cover classification are extremely promising. The quality of satellite images available for land-use mapping is improving rapidly with the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by new satellite sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions; therefore, the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each of the single sensor data alone. A good example is the fusion of images acquired by different sensors having different spatial and spectral resolutions. Researchers have been applying fusion techniques for three decades and have proposed various useful methods. High-quality synthesis of spectral information is well suited to, and has been implemented for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion. It was found that multisensor image fusion is a tradeoff between the spectral information from a low resolution multi-spectral image and the spatial information from a high resolution multi-spectral image. With wavelet transform based fusion methods, it is easy to control this tradeoff. A newer transform, the curvelet transform, was introduced in recent years by Starck. A ridgelet transform is applied to square blocks of detail frames of an undecimated wavelet decomposition, and consequently the curvelet transform is obtained. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform is capable of representing piecewise linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric details and background noise, which may easily be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 µm to 14.4 µm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (spatial resolution 250 m, bandwidth 620-670 nm) and band 2 (spatial resolution 250 m, bandwidth 842-876 nm) are considered, as these bands have features suited to identifying agriculture and other land covers.
In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array-type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities. PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The principle of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features in the imagery to be seen. In this paper we explore a fusion technique for enhancing the land cover classification of low resolution satellite data, especially freely available satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. Initially, MODIS band 1 and band 2 are considered and their principal components are computed. Similarly, the PALSAR HH, HV and VV polarized data are considered, and their principal components are also computed. Consequently, the PALSAR principal component image is fused with the MODIS principal component image. The aim of this paper is to analyze the effect on classification accuracy for major land cover types such as agriculture, water and urban areas of fusing PALSAR data with MODIS data. The curvelet transform has been applied for fusion of these two satellite images, and the Minimum Distance classification technique has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
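As an illustration of the principal-component step described above, the sketch below computes the leading principal component of a multi-band stack with NumPy. The array shapes are hypothetical stand-ins for the co-registered MODIS and PALSAR data, and the curvelet-based fusion itself is only indicated by a comment.

```python
# Sketch: first principal component of a multi-band image stack, as used here for
# MODIS (band 1, band 2) and for PALSAR (HH, HV, VV) prior to fusion.
# Array shapes and the fusion step itself are illustrative assumptions.
import numpy as np

def first_principal_component(bands):
    """bands: array of shape (n_bands, rows, cols) -> PC1 image of shape (rows, cols)."""
    n_bands, rows, cols = bands.shape
    flat = bands.reshape(n_bands, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)       # center each band
    cov = np.cov(flat)                             # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    pc1 = eigvecs[:, -1] @ flat                    # project onto the leading eigenvector
    return pc1.reshape(rows, cols)

# Synthetic stand-ins for the real, co-registered imagery.
modis = np.random.rand(2, 400, 400)    # MODIS band 1 and band 2
palsar = np.random.rand(3, 400, 400)   # PALSAR HH, HV, VV
modis_pc1 = first_principal_component(modis)
palsar_pc1 = first_principal_component(palsar)
# The two PC1 images would then be fused (curvelet-based in the paper) and classified.
```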
Chadha, Navriti; Silakari, Om
2017-09-01
Diabetic complications constitute a complex metabolic disorder that develops primarily due to prolonged hyperglycemia in the body. The complexity of the disease state, as well as the unifying pathophysiology discussed in the literature, suggests that multi-targeted agents with multiple complementary biological activities may offer more promising therapy for intervention in the disease than single-target drugs. In the present study, novel thiazolidine-2,4-dione analogues were designed as multi-targeted agents directed against the molecular pathways involved in diabetic complications, using knowledge-based as well as in-silico approaches such as pharmacophore mapping and molecular docking. The hit molecules were duly synthesized, and biochemical evaluation of these molecules against aldose reductase (ALR2), protein kinase Cβ (PKCβ) and poly (ADP-ribose) polymerase 1 (PARP-1) led to the identification of compound 2, which showed good potency against the PARP-1 and ALR2 enzymes. These positive results support the development of a low-cost multi-targeted agent with putative roles in diabetic complications. Copyright © 2017 Elsevier Inc. All rights reserved.
Advanced techniques for the storage and use of very large, heterogeneous spatial databases
NASA Technical Reports Server (NTRS)
Peuquet, Donna J.
1987-01-01
Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.
NASA Astrophysics Data System (ADS)
Albreht, Alen; Vovk, Irena; Mavri, Janez; Marco-Contelles, Jose; Ramsay, Rona R.
2018-05-01
Successful propargylamine drugs such as deprenyl inactivate monoamine oxidase (MAO), a target in multi-faceted approaches to prevent neurodegeneration in the aging population, but the chemical structure and mechanism of the irreversible inhibition are still debated. We characterized the covalent cyanine structure linking the multi-target propargylamine inhibitor ASS234 and the flavin adenine dinucleotide in MAO-A using a combination of ultra-high performance liquid chromatography, spectroscopy, mass spectrometry, and computational methods. The partial double bond character of the cyanine chain gives rise to 4 interconverting geometric isomers of the adduct which were chromatographically separated at low temperatures. The configuration of the cyanine linker governs adduct stability with segments of much higher flexibility and rigidity than previously hypothesized. The findings indicate the importance of intramolecular electrostatic interactions in the MAO binding site and provide key information relevant to incorporation of the propargyl moiety into novel multi-target drugs. Based on the structure, we propose a mechanism of MAO inactivation applicable to all propargylamine inhibitors.
Virtual target tracking (VTT) as applied to mobile satellite communication networks
NASA Astrophysics Data System (ADS)
Amoozegar, Farid
1999-08-01
Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of Global Communication Systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing represent a huge potential market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. We discuss how Virtual Target Tracking would bring more diversity to target tracking research.
Two-phase framework for near-optimal multi-target Lambert rendezvous
NASA Astrophysics Data System (ADS)
Bang, Jun; Ahn, Jaemyung
2018-03-01
This paper proposes a two-phase framework to obtain a near-optimal solution of the multi-target Lambert rendezvous problem. The objective of the problem is to determine the minimum-cost rendezvous sequence and trajectories to visit a given set of targets within a maximum mission duration. The first phase solves a series of single-target rendezvous problems for all departure-arrival object pairs to generate the elementary solutions, which provide candidate rendezvous trajectories. The second phase formulates a variant of the traveling salesman problem (TSP) using the elementary solutions prepared in the first phase and determines the final rendezvous sequence and trajectories of the multi-target rendezvous problem. The validity of the proposed optimization framework is demonstrated through an asteroid exploration case study.
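The two-phase structure can be illustrated with a toy sketch: a placeholder cost table stands in for the phase-1 single-target Lambert solutions, and phase 2 searches visiting orders as a small open-path TSP by brute force. This is an assumption-laden illustration of the framework, not the paper's solver.

```python
# Sketch of the two-phase idea: phase 1 builds a table of pairwise transfer costs
# (a random placeholder replaces the single-target Lambert solutions), phase 2
# searches visiting orders as a small open-path TSP. Toy values for illustration only.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_targets = 5
# Phase 1: pairwise transfer costs between the departure object (index 0) and targets 1..n.
cost = rng.uniform(1.0, 10.0, size=(n_targets + 1, n_targets + 1))
np.fill_diagonal(cost, 0.0)

# Phase 2: brute-force the visiting order starting from index 0 (feasible for few targets).
best_order, best_cost = None, np.inf
for order in itertools.permutations(range(1, n_targets + 1)):
    route = (0,) + order
    total = sum(cost[a, b] for a, b in zip(route[:-1], route[1:]))
    if total < best_cost:
        best_order, best_cost = order, total

print("best visiting sequence:", best_order, "total cost:", round(best_cost, 2))
```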
NASA Astrophysics Data System (ADS)
Zhang, Kongwen; Hu, Baoxin; Robinson, Justin
2014-01-01
The emerald ash borer (EAB) poses a significant economic and environmental threat to ash trees in southern Ontario, Canada, and the northern states of the USA. It is critical that effective technologies are urgently developed to detect, monitor, and control the spread of EAB. This paper presents a methodology using multisourced data to predict potential infestations of EAB in the town of Oakville, Ontario, Canada. The information combined in this study includes remotely sensed data, such as high spatial resolution aerial imagery, commercial ground and airborne hyperspectral data, and Google Earth imagery, in addition to nonremotely sensed data, such as archived paper maps and documents. This wide range of data provides extensive information that can be used for early detection of EAB, yet their effective employment and use remain a significant challenge. A prediction function was developed to estimate the EAB infestation states of individual ash trees using three major attributes: leaf chlorophyll content, tree crown spatial pattern, and prior knowledge. Comparison between these predicted values and a ground-based survey demonstrated an overall accuracy of 62.5%, with 22.5% omission and 18.5% commission errors.
A multi-source feedback tool for measuring a subset of Pediatrics Milestones.
Schwartz, Alan; Margolis, Melissa J; Multerer, Sara; Haftel, Hilary M; Schumacher, Daniel J
2016-10-01
The Pediatrics Milestones Assessment Pilot employed a new multisource feedback (MSF) instrument to assess nine Pediatrics Milestones among interns and subinterns in the inpatient context. Our aim was to report validity evidence for the MSF tool for informing milestone classification decisions. We obtained MSF instruments from different raters for each learner per rotation, and we present evidence for validity based on the unified validity framework. One hundred ninety-two interns and 41 subinterns at 18 Pediatrics residency programs received a total of 1084 MSF forms from faculty (40%), senior residents (34%), nurses (22%), and other staff (4%). Variance in ratings was associated primarily with rater (32%) and learner (22%). The milestone factor structure fit the data better than simpler structures. In all domains except professionalism, ratings by nurses were significantly lower than those by faculty, and ratings by other staff were significantly higher. Ratings were higher when the rater had observed the learner for longer periods and had a positive global opinion of the learner. Ratings of interns and subinterns did not differ, except for ratings by senior residents. MSF-based scales correlated with summative milestone scores. We obtained moderately reliable MSF ratings of interns and subinterns in the inpatient context to inform some milestone assignments.
Imputation for multisource data with comparison and assessment techniques
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
2017-12-27
Missing data are a prevalent issue in analyses involving data collection. The problem is exacerbated for multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion: missing values are more likely to occur and can lead to discarding a large amount of the data collected. However, the information from observed sensors can be leveraged to estimate the values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of the imputation methods is compared with the mean absolute deviation; however, rather than using this metric solely to rank the methods, we also propose an approach to identify significant differences. Imputation techniques are also assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance of imputed datasets is compared with a marginalized dataset through weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
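A minimal sketch of the ridge-regression imputation idea follows, assuming a small synthetic set of correlated sensors; the data, missingness pattern, and regularization strength are illustrative, not those of the monitored facility.

```python
# Sketch of ridge-regression imputation: predict one sensor's missing readings
# from the other sensors' observed readings. Data and parameters are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 500
latent = rng.normal(size=n)
# Four correlated "sensors" driven by a common signal plus noise.
X = np.column_stack([latent + 0.1 * rng.normal(size=n) for _ in range(4)])

# Knock out 20% of sensor 0's readings.
missing = rng.random(n) < 0.2
y_true = X[:, 0].copy()
X[missing, 0] = np.nan

# Fit ridge on rows where sensor 0 is observed, using the other sensors as predictors.
obs = ~missing
model = Ridge(alpha=1.0).fit(X[obs][:, 1:], X[obs, 0])
X[missing, 0] = model.predict(X[missing][:, 1:])

mad = np.mean(np.abs(X[missing, 0] - y_true[missing]))
print("mean absolute deviation of imputed values:", round(mad, 3))
```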
Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.
2011-01-01
Pattern classification of brain imaging data can enable the automatic detection of differences in the cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information about the regions of the brain that are most relevant to detecting these differences by means of feature selection procedures, which are also well suited to dealing with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95%, estimated with a leave-two-out cross-validation procedure, was achieved by multi-source data classification. In addition, it is shown that the classification accuracy obtained by using multi-source data surpasses that reached by using single-source data, showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948
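A simplified sketch of recursive feature elimination with an SVM is given below; it uses a linear SVM over synthetic features rather than the composite-kernel, region-level formulation of the paper, so it conveys only the elimination loop.

```python
# Simplified sketch of SVM-based recursive feature elimination: the paper prunes
# whole anatomical regions using composite kernels; here a linear SVM ranks and
# prunes individual synthetic features to illustrate the loop.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=50, n_informative=8, random_state=0)
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=8, step=5).fit(X, y)

kept = selector.support_.nonzero()[0]
acc = cross_val_score(SVC(kernel="linear", C=1.0), X[:, kept], y, cv=5).mean()
print("retained feature indices:", kept)
print("cross-validated accuracy on retained features:", round(acc, 3))
```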
Understanding the Influence of Emotions and Reflection upon Multi-Source Feedback Acceptance and Use
ERIC Educational Resources Information Center
Sargeant, Joan; Mann, Karen; Sinclair, Douglas; Van der Vleuten, Cees; Metsemakers, Job
2008-01-01
Introduction: Receiving negative performance feedback can elicit negative emotional reactions which can interfere with feedback acceptance and use. This study investigated emotional responses of family physicians participating in a multi-source feedback (MSF) program, sources of these emotions, and their influence upon feedback acceptance and…
Naphthoquinone Derivatives Exert Their Antitrypanosomal Activity via a Multi-Target Mechanism
Mazet, Muriel; Perozzo, Remo; Bergamini, Christian; Prati, Federica; Fato, Romana; Lenaz, Giorgio; Capranico, Giovanni; Brun, Reto; Bakker, Barbara M.; Michels, Paul A. M.; Scapozza, Leonardo; Bolognesi, Maria Laura; Cavalli, Andrea
2013-01-01
Background and Methodology Recently, we reported on a new class of naphthoquinone derivatives showing a promising anti-trypanosomatid profile in cell-based experiments. The lead of this series (B6, 2-phenoxy-1,4-naphthoquinone) showed an ED50 of 80 nM against Trypanosoma brucei rhodesiense, and a selectivity index of 74 with respect to mammalian cells. A multitarget profile for this compound is easily conceivable, because quinones, as natural products, serve plants as potent defense chemicals with an intrinsic multifunctional mechanism of action. To disclose such a multitarget profile of B6, we exploited a chemical proteomics approach. Principal Findings A functionalized congener of B6 was immobilized on a solid matrix and used to isolate target proteins from Trypanosoma brucei lysates. Mass analysis delivered two enzymes, i.e. glycosomal glycerol kinase and glycosomal glyceraldehyde-3-phosphate dehydrogenase, as potential molecular targets for B6. Both enzymes were recombinantly expressed and purified, and used for chemical validation. Indeed, B6 was able to inhibit both enzymes with IC50 values in the micromolar range. The multifunctional profile was further characterized in experiments using permeabilized Trypanosoma brucei cells and mitochondrial cell fractions. It turned out that B6 was also able to generate oxygen radicals, a mechanism that may additionally contribute to its observed potent trypanocidal activity. Conclusions and Significance Overall, B6 showed a multitarget mechanism of action, which provides a molecular explanation of its promising anti-trypanosomatid activity. Furthermore, the forward chemical genetics approach here applied may be viable in the molecular characterization of novel multitarget ligands. PMID:23350008
Twitter web-service for soft agent reporting in persistent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2010-04-01
Persistent surveillance is an intricate process requiring monitoring, gathering, processing, tracking, and characterization of many spatiotemporal events occurring concurrently. Data associated with events can be readily attained by networking of hard (physical) sensors. Sensors may have homogeneous or heterogeneous (hybrid) sensing modalities with different communication bandwidth requirements. Complementary to hard sensors are human observers or "soft sensors" that can report occurrences of evolving events via different communication devices (e.g., texting, cell phones, emails, instant messaging, etc.) to the command and control center. However, networking human observers in an ad-hoc way is a rather difficult task. In this paper, we present a Twitter web-service for soft agent reporting in persistent surveillance systems (called Web-STARS). The objective of this web-service is to rapidly aggregate multi-source human observations in hybrid sensor networks. With the availability of the Twitter social network, such a human networking concept can not only be realized for large-scale persistent surveillance systems (PSS), but can also be employed with proper interfaces to expedite rapid event reporting by human observers. The proposed technique is particularly suitable for large-scale persistent surveillance systems with distributed soft and hard sensor networks. The efficiency and effectiveness of the proposed technique are measured experimentally by conducting several simulated persistent surveillance scenarios. It is demonstrated that fusion of information from hard and soft agents improves understanding of the common operating picture and enhances situational awareness.
Designing multi-targeted agents: An emerging anticancer drug discovery paradigm.
Fu, Rong-Geng; Sun, Yuan; Sheng, Wen-Bing; Liao, Duan-Fang
2017-08-18
The dominant paradigm in drug discovery is to design ligands with maximum selectivity that act on individual drug targets. With the target-based approach, many new chemical entities have been discovered, developed, and further approved as drugs. However, there are a large number of complex diseases, such as cancer, that cannot be effectively treated or cured with a single medicine modulating the biological function of a single target. As simultaneous intervention at two (or multiple) targets relevant to cancer progression has shown improved therapeutic efficacy, the development of multi-targeted drugs has become a promising and prevailing research topic, and numerous multi-targeted anticancer agents are currently at various developmental stages. However, most multi-pharmacophore scaffolds are discovered by serendipity or screening, while rational design by combining existing pharmacophore scaffolds remains an enormous challenge. In this review, four types of multi-pharmacophore modes are discussed, and examples from the literature are used to introduce attractive lead compounds capable of simultaneously interfering with different enzymes or signaling pathways of cancer progression, revealing trends and insights to help the design of the next generation of multi-targeted anticancer agents. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
A ranking method for the concurrent learning of compounds with various activity profiles.
Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas
2015-01-01
In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set) containing three different biological targets each. The experiments show that ranking-based algorithms yield increased performance for single- and multi-target virtual screening. Moreover, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile, compared to other multi-target SVM methods. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when finding many ligands with an already perfectly matching activity profile is not to be expected.
A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.
Zhao, Lei; Mi, Dong; Sun, Yeqing
2017-05-07
The multitarget version of the traditional target theory, based on the Poisson distribution, is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's distribution of type A (N. A.). Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the case of high flux, a multitarget model based on the Gaussian distribution is proposed to describe cell inactivation effects under low linear energy transfer (LET) radiation with a high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the Linear-Quadratic (LQ) model in describing the biological effects of low-LET radiation with a high dose rate, and the parameter ratio in the present model can be used as an alternative indicator of the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
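For orientation, the sketch below fits the classical Poisson-based multitarget model and the LQ model to a synthetic dose-survival curve; the Gaussian-based variant proposed in the paper is not reproduced, and the data are invented.

```python
# Sketch: fitting the classical multitarget model S(D) = 1 - (1 - exp(-D/D0))^n
# and the Linear-Quadratic model S(D) = exp(-(a*D + b*D^2)) to synthetic data,
# to illustrate the kind of comparison made in the paper. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def multitarget(D, D0, n):
    return 1.0 - (1.0 - np.exp(-D / D0)) ** n

def linear_quadratic(D, alpha, beta):
    return np.exp(-(alpha * D + beta * D ** 2))

dose = np.linspace(0, 10, 11)
noise = 1 + 0.02 * np.random.default_rng(3).normal(size=dose.size)
survival = linear_quadratic(dose, 0.3, 0.03) * noise   # synthetic "measured" survival

mt_params, _ = curve_fit(multitarget, dose, survival, p0=[2.0, 2.0])
lq_params, _ = curve_fit(linear_quadratic, dose, survival, p0=[0.1, 0.01])
print("multitarget fit (D0, n):", np.round(mt_params, 3))
print("LQ fit (alpha, beta):", np.round(lq_params, 3))
```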
An integrated multi-source energy harvester based on vibration and magnetic field energy
NASA Astrophysics Data System (ADS)
Hu, Zhengwen; Qiu, Jing; Wang, Xian; Gao, Yuan; Liu, Xin; Chang, Qijie; Long, Yibing; He, Xingduo
2018-05-01
In this paper, an integrated multi-source energy harvester (IMSEH) employing a specially shaped cantilever beam and a piezoelectric transducer to convert vibration and magnetic field energy into electrical energy is presented. The electrical output performance of the proposed IMSEH has been investigated. Compared to a traditional multi-source energy harvester (MSEH) or single-source energy harvester (SSEH), the proposed IMSEH can simultaneously harvest vibration and magnetic field energy with an integrated structure, and the electrical output is greatly improved. Under otherwise identical conditions, the IMSEH can obtain a high voltage of 12.8 V. The proposed IMSEHs have great potential for application in wireless sensor networks.
Harth, Yoram
2015-03-01
In the last decade, radiofrequency (RF) energy has proven to be safe and highly efficacious for face and neck skin tightening, body contouring, and cellulite reduction. In contrast to first-generation monopolar/bipolar and "X-Polar" RF systems, which use one RF generator connected to one or more skin electrodes, multisource radiofrequency devices use six independent RF generators, allowing efficient dermal heating to 52-55°C with no pain or risk of other side effects. In this review, the basic science and clinical results of body contouring and cellulite treatment using a multisource radiofrequency system (Endymed PRO, Endymed, Cesarea, Israel) will be discussed and analyzed. © 2015 Wiley Periodicals, Inc.
Evaluation of the Jonker-Volgenant-Castanon (JVC) assignment algorithm for track association
NASA Astrophysics Data System (ADS)
Malkoff, Donald B.
1997-07-01
The Jonker-Volgenant-Castanon (JVC) assignment algorithm was used by Lockheed Martin Advanced Technology Laboratories (ATL) for track association in the Rotorcraft Pilot's Associate (RPA) program. RPA is Army Aviation's largest science and technology program, involving an integrated hardware/software system approach for a next-generation helicopter containing advanced sensor equipment and applying artificial intelligence `associate' technologies. ATL is responsible for the multisensor, multitarget, onboard/offboard track fusion. McDonnell Douglas Helicopter Systems is the prime contractor and Lockheed Martin Federal Systems is responsible for developing much of the cognitive decision aiding and controls-and-displays subsystems. RPA is scheduled for flight testing beginning in 1997. RPA is unique in requiring real-time tracking and fusion for large numbers of highly maneuverable ground (and air) targets in a target-dense environment. It uses diverse sensors and is concerned with a large area of interest. Target class and identification data are tightly integrated with spatial and kinematic data throughout the processing. Because of platform constraints, processing hardware for track fusion was quite limited. No previous experience using JVC in this type of environment had been reported. ATL performed extensive testing of the JVC, concentrating on error rates and run-times under a variety of conditions. These included wide-ranging numbers and types of targets, sensor uncertainties, target attributes, differing degrees of target maneuverability, and diverse combinations of sensors. Testing utilized Monte Carlo approaches as well as many kinds of challenging scenarios. Comparisons were made with a nearest-neighbor algorithm and a new, proprietary algorithm (the `Competition' algorithm). The JVC proved to be an excellent choice for the RPA environment, providing a good balance between speed of operation and accuracy of results.
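A minimal sketch of the assignment step follows: SciPy's linear_sum_assignment, which implements a modified Jonker-Volgenant algorithm in recent releases, stands in for the JVC solver evaluated in the paper, applied to a toy track-to-measurement cost matrix.

```python
# Sketch of track-to-measurement association as a linear assignment problem.
# SciPy's linear_sum_assignment (a modified Jonker-Volgenant solver in recent
# releases) stands in for the JVC algorithm; the cost matrix is a toy example
# of gating distances between predicted tracks and new measurements.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[ 1.2,  8.5, 20.0],
                 [ 9.1,  0.7, 15.3],
                 [18.4, 14.9,  2.1]])

track_idx, meas_idx = linear_sum_assignment(cost)
for t, m in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {m} (cost {cost[t, m]:.1f})")
print("total association cost:", cost[track_idx, meas_idx].sum())
```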
Multitarget-multisensor management for decentralized sensor networks
NASA Astrophysics Data System (ADS)
Tharmarasa, R.; Kirubarajan, T.; Sinha, A.; Hernandez, M. L.
2006-05-01
In this paper, we consider the problem of sensor resource management in decentralized tracking systems. Due to the availability of cheap sensors, it is possible to use a large number of sensors and a few fusion centers (FCs) to monitor a large surveillance region. Even though a large number of sensors are available, due to frequency, power and other physical limitations, only a few of them can be active at any one time. The problem is then to select sensor subsets that should be used by each FC at each sampling time in order to optimize the tracking performance subject to their operational constraints. In a recent paper, we proposed an algorithm to handle the above issues for joint detection and tracking, without using simplistic clustering techniques that are standard in the literature. However, in that paper, a hierarchical architecture with feedback at every sampling time was considered, and the sensor management was performed only at a central fusion center (CFC). However, in general, it is not possible to communicate with the CFC at every sampling time, and in many cases there may not even be a CFC. Sometimes, communication between CFC and local fusion centers might fail as well. Therefore performing sensor management only at the CFC is not viable in most networks. In this paper, we consider an architecture in which there is no CFC, each FC communicates only with the neighboring FCs, and communications are restricted. In this case, each FC has to decide which sensors are to be used by itself at each measurement time step. We propose an efficient algorithm to handle the above problem in real time. Simulation results illustrating the performance of the proposed algorithm are also presented.
Development of Physical Therapy Practical Assessment System by Using Multisource Feedback
ERIC Educational Resources Information Center
Hengsomboon, Ninwisan; Pasiphol, Shotiga; Sujiva, Siridej
2017-01-01
The purposes of the research were (1) to develop the physical therapy practical assessment system by using the multisource feedback (MSF) approach and (2) to investigate the effectiveness of the implementation of the developed physical therapy practical assessment system. The development of physical therapy practical assessment system by using MSF…
ERIC Educational Resources Information Center
Roberts, Martin J.; Campbell, John L.; Richards, Suzanne H.; Wright, Christine
2013-01-01
Introduction: Multisource feedback (MSF) ratings provided by patients and colleagues are often poorly correlated with doctors' self-assessments. Doctors' reactions to feedback depend on its agreement with their own perceptions, but factors influencing self-other agreement in doctors' MSF ratings have received little attention. We aimed to identify…
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, representation of statistical evidence and combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
NASA Astrophysics Data System (ADS)
Xie, Jiayu; Wang, Gongwen; Sha, Yazhou; Liu, Jiajun; Wen, Botao; Nie, Ming; Zhang, Shuai
2017-04-01
Integrating multi-source geoscience information (such as geology, geophysics, geochemistry, and remote sensing) using GIS mapping is one of the key topics and frontiers in quantitative geosciences for mineral exploration. GIS prospectivity mapping and three-dimensional (3D) modeling can be used not only to extract exploration criteria and delineate metallogenic targets but also to provide important information for the quantitative assessment of mineral resources. This paper uses the Shangnan district of Shaanxi province (China) as a case study area. GIS mapping and potential granite-hydrothermal uranium targeting were conducted in the study area, combining weights of evidence (WofE) and concentration-area (C-A) fractal methods with multi-source geoscience information. 3D deposit-scale modeling using GOCAD software was performed to validate the shapes and features of the potential targets at the subsurface. The research results show that: (1) the known deposits have potential zones at depth, and the 3D geological models can delineate surface or subsurface ore-forming features, which can be used to analyze the uncertainty of the shape and features of prospectivity mapping at the subsurface; (2) geochemical or remote sensing anomalies at the surface alone must be combined with geophysical depth exploration criteria to identify potential targets; and (3) zones with single or sparse exploration criteria and few mineralization spots at the surface carry high uncertainty in terms of exploration targeting.
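The weights-of-evidence step can be illustrated with a short sketch that computes W+, W-, and the contrast for one binary evidence layer from cell counts; the counts are invented, and the C-A fractal thresholding and GIS integration are not shown.

```python
# Sketch of the weights-of-evidence computation for one binary evidence layer
# (e.g., a geochemical anomaly) against known deposit cells. Counts are invented.
import numpy as np

def weights_of_evidence(n_total, n_deposits, n_evidence, n_evidence_and_deposit):
    d, b_d = n_deposits, n_evidence_and_deposit
    b_nd = n_evidence - b_d                                   # evidence cells without deposits
    nd = n_total - d                                          # non-deposit cells
    w_plus = np.log((b_d / d) / (b_nd / nd))                  # weight where evidence is present
    w_minus = np.log(((d - b_d) / d) / ((nd - b_nd) / nd))    # weight where evidence is absent
    return w_plus, w_minus, w_plus - w_minus                  # contrast C = W+ - W-

w_plus, w_minus, contrast = weights_of_evidence(
    n_total=100000, n_deposits=40, n_evidence=5000, n_evidence_and_deposit=18)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {contrast:.2f}")
```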
Kalash, Leen; Val, Cristina; Azuaje, Jhonny; Loza, María I; Svensson, Fredrik; Zoufir, Azedine; Mervin, Lewis; Ladds, Graham; Brea, José; Glen, Robert; Sotelo, Eddy; Bender, Andreas
2017-12-30
Compounds designed to display polypharmacology may have utility in treating complex diseases, where activity at multiple targets is required to produce a clinical effect. In particular, suitable compounds may be useful in treating neurodegenerative diseases by promoting neuronal survival in a synergistic manner via their multi-target activity at the adenosine A1 and A2A receptors (A1R and A2AR) and phosphodiesterase 10A (PDE10A), which modulate intracellular cAMP levels. Hence, in this work we describe a computational method for the design of synthetically feasible ligands that bind to A1 and A2A receptors and inhibit phosphodiesterase 10A (PDE10A), involving a retrosynthetic approach employing in silico target prediction and docking, which may be generally applicable to multi-target compound design at several target classes. This approach has identified 2-aminopyridine-3-carbonitriles as the first multi-target ligands at A1R, A2AR and PDE10A, by showing agreement between the ligand- and structure-based predictions at these targets. The series were synthesized via an efficient one-pot scheme and validated pharmacologically as A1R/A2AR-PDE10A ligands, with IC50 values of 2.4-10.0 μM at PDE10A and Ki values of 34-294 nM at A1R and/or A2AR. Furthermore, selectivity profiling of the synthesized 2-aminopyridine-3-carbonitriles against other subtypes of both protein families showed that the multi-target ligand 8 exhibited a minimum of twofold selectivity over all tested off-targets. In addition, both compounds 8 and 16 exhibited the desired multi-target profile, which could be considered for further functional efficacy assessment, analog modification for improving selectivity towards A1R, A2AR and PDE10A collectively, and evaluation of their potential synergy in modulating cAMP levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Y; Keall, P; Poulsen, P
Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for the simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes similarly well as the no-motion-correction treatment and covers the moving prostate similarly well as real-time prostate-only tracking. Multi-target tracking reduces >90% of uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ~50% of the uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt for independently moving targets better than other available treatment adaptations. This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.
Jouhet, V; Defossez, G; Ingrand, P
2013-01-01
The aim of this study was to develop and evaluate a selection algorithm for relevant records for the notification of incident cases of cancer, on the basis of the individual data available in a multi-source information system. This work was conducted on data for the year 2008 from the general cancer registry of the Poitou-Charentes region (France). The selection algorithm ranks information according to its level of relevance for tumoral topography and tumoral morphology independently. The selected data are combined to form composite records, which are then grouped in accordance with the notification rules of the International Agency for Research on Cancer for multiple primary cancers. The evaluation, based on recall, precision and F-measure, compared cases validated manually by the registry's physicians with tumours notified with and without record selection. The analysis involved 12,346 tumours validated among 11,971 individuals. The data used were hospital discharge data (104,474 records), pathology data (21,851 records), healthcare insurance data (7508 records) and cancer care centre data (686 records). The selection algorithm improved performance for the notification of tumour topography (F-measure 0.926 with vs. 0.857 without selection) and tumour morphology (F-measure 0.805 with vs. 0.750 without selection). These results show that selecting information according to its origin is efficient in reducing the noise generated by imprecise coding. Further research is needed to solve the semantic problems relating to the integration of heterogeneous data and the use of non-structured information.
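The evaluation metrics mentioned above reduce to a few lines; the sketch below computes recall, precision, and F-measure for hypothetical with/without-selection counts, not the registry's actual figures.

```python
# Sketch of the recall / precision / F-measure comparison used to evaluate the
# record-selection algorithm against manually validated tumours. Counts are
# hypothetical placeholders.
def evaluate(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

for label, counts in {"with selection": (900, 60, 90),
                      "without selection": (860, 160, 130)}.items():
    p, r, f = evaluate(*counts)
    print(f"{label}: precision={p:.3f} recall={r:.3f} F-measure={f:.3f}")
```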
Information transfer in auditoria and room-acoustical quality.
Summers, Jason E
2013-04-01
It is hypothesized that room-acoustical quality correlates with the information-transfer rate. Auditoria are considered as multiple-input multiple-output communication channels and a theory of information-transfer is outlined that accounts for time-variant multipath, spatial hearing, and distributed directional sources. Source diversity and spatial hearing are shown to be the mechanisms through which multipath increases the information-transfer rate by overcoming finite spatial resolution. In addition to predictions that are confirmed by recent and historical findings, the theory provides explanations for the influence of factors such as musical repertoire and ensemble size on subjective preference and the influence of multisource, multichannel auralization on perceived realism.
Spatial characterization of the meltwater field from icebergs in the Weddell Sea.
Helly, John J; Kaufmann, Ronald S; Vernet, Maria; Stephenson, Gordon R
2011-04-05
We describe the results from a spatial cyberinfrastructure developed to characterize the meltwater field around individual icebergs and integrate the results with regional- and global-scale data. During the course of the cyberinfrastructure development, it became clear that we were also building an integrated sampling planning capability across multidisciplinary teams that provided greater agility in allocating expedition resources resulting in new scientific insights. The cyberinfrastructure-enabled method is a complement to the conventional methods of hydrographic sampling in which the ship provides a static platform on a station-by-station basis. We adapted a sea-floor mapping method to more rapidly characterize the sea surface geophysically and biologically. By jointly analyzing the multisource, continuously sampled biological, chemical, and physical parameters, using Global Positioning System time as the data fusion key, this surface-mapping method enables us to examine the relationship between the meltwater field of the iceberg to the larger-scale marine ecosystem of the Southern Ocean. Through geospatial data fusion, we are able to combine very fine-scale maps of dynamic processes with more synoptic but lower-resolution data from satellite systems. Our results illustrate the importance of spatial cyberinfrastructure in the overall scientific enterprise and identify key interfaces and sources of error that require improved controls for the development of future Earth observing systems as we move into an era of peta- and exascale, data-intensive computing.
Geometric Factors in Target Positioning and Tracking
2009-07-01
Shalom and X.R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing, Storrs, CT, 1995. [2] S. Blackman and R. Popoli, Design...Multitarget-Multisensor Tracking: Applications and Advances, Vol. 2, Y. Bar-Shalom (Ed.), 325-392, Artech House, Norwood, MA, 1999. [10] B. Ristic...R. Yarlagadda, I. Ali, N. Al-Dhahir, and J. Hershey, "GPS GDOP Metric," IEE Proc. Radar, Sonar Navig, 147(5), Oct. 2000. [14] A. Kelly
Multi-Target Regression via Robust Low-Rank Learning.
Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo
2018-02-01
Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.
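The low-rank intuition can be illustrated with a reduced-rank regression sketch: fit an ordinary multi-output least-squares weight matrix and truncate its SVD to a small rank so the targets share a common low-dimensional structure. This is a simplification on synthetic data, not the MMR algorithm or its matrix-elastic-net and kernel machinery.

```python
# Sketch of the low-rank idea behind multi-target regression: full-rank least
# squares followed by SVD truncation of the weight matrix (reduced-rank regression).
import numpy as np

rng = np.random.default_rng(4)
n, p, t, rank = 300, 20, 6, 2
W_true = rng.normal(size=(p, rank)) @ rng.normal(size=(rank, t))   # truly low-rank mapping
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)        # full-rank multi-output least squares
U, s, Vt = np.linalg.svd(W_ols, full_matrices=False)
W_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # keep only the leading factors

print("full-rank residual norm:", round(np.linalg.norm(X @ W_ols - Y), 2))
print(f"rank-{rank} residual norm:", round(np.linalg.norm(X @ W_lowrank - Y), 2))
```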
A Multisource Approach to Assessing Child Maltreatment From Records, Caregivers, and Children.
Sierau, Susan; Brand, Tilman; Manly, Jody Todd; Schlesier-Michel, Andrea; Klein, Annette M; Andreas, Anna; Garzón, Leonhard Quintero; Keil, Jan; Binser, Martin J; von Klitzing, Kai; White, Lars O
2017-02-01
Practitioners and researchers alike face the challenge that different sources report inconsistent information regarding child maltreatment. The present study capitalizes on concordance and discordance between different sources and probes the applicability of a multisource approach to data from three perspectives on maltreatment: Child Protection Services (CPS) records, caregivers, and children. The sample comprised 686 participants in early childhood (3- to 8-year-olds; n = 275) or late childhood/adolescence (9- to 16-year-olds; n = 411), 161 from two CPS sites and 525 from the community, oversampled for psychosocial risk. We established three components within a factor-analytic approach: the shared variance between sources on the presence of maltreatment (convergence), nonshared variance resulting from the child's own perspective, and the caregiver versus CPS perspective. The shared variance between sources was the strongest predictor of caregiver- and self-reported child symptoms. The child perspective and the caregiver versus CPS perspective mainly added predictive strength for symptoms in late childhood/adolescence over and above convergence in the case of emotional maltreatment, lack of supervision, and physical abuse. By contrast, convergence almost fully accounted for child symptoms for failure to provide. Our results suggest that consistent information from different sources reporting on maltreatment is, on average, the best indicator of child risk.
Pleiades image quality: from users' needs to products definition
NASA Astrophysics Data System (ADS)
Kubik, Philippe; Pascal, Véronique; Latry, Christophe; Baillarin, Simon
2005-10-01
Pleiades is the highest resolution civilian Earth observing system ever developed in Europe. This imagery programme is conducted by the French National Space Agency, CNES. In 2008-2009 it will operate two agile satellites designed to provide optical images to civilian and defence users. Images will be simultaneously acquired in Panchromatic (PA) and multispectral (XS) mode, which, under nadir acquisition conditions, allows delivery of 20 km wide, false- or natural-colored scenes with a 70 cm ground sampling distance after PA+XS fusion. Imaging capabilities have been highly optimized in order to acquire along-track mosaics, stereo pairs and triplets, and multiple targets. To fulfill the operational requirements and ensure quick access to information, ground processing has to perform the radiometric and geometric corrections automatically. Since ground processing capabilities were taken into account very early in the programme development, it has been possible to relax some costly on-board component requirements in order to achieve a cost-effective on-board/ground compromise. Starting from an overview of the system characteristics, this paper deals with the image product definitions (raw level, perfect sensor, orthoimage and along-track orthomosaics) and the main processing steps. It shows how each system performance results from the satellite performance followed by appropriate ground processing. Finally, it focuses on the radiometric performance of the final products, which is intimately linked to the following processing steps: radiometric corrections, PA restoration, image resampling and PAN-sharpening.
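The PAN-sharpening step mentioned above can be illustrated with a simple band-ratio scheme. This is a generic Brovey-style sketch, not the actual Pleiades ground-segment algorithm; the array shapes and the `eps` guard are assumptions.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (H, W, B) multispectral bands resampled to the PAN grid; pan: (H, W).
    Each band is rescaled by the PAN/intensity ratio to inject spatial detail."""
    intensity = ms.sum(axis=2) + eps
    return ms * (pan / intensity)[..., None]

rng = np.random.default_rng(0)
ms = rng.random((64, 64, 4))      # toy 4-band image
pan = rng.random((64, 64))        # toy panchromatic image
print(brovey_pansharpen(ms, pan).shape)
```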
Technologies for Army Knowledge Fusion
2004-09-01
Knowledge fusion, also called information fusion and multisensor data fusion, names the body of techniques needed to combine knowledge from civilian and military sources so that analysts can interpret it in context and understand the implications (Alberts et al., 2002). The knowledge/information fusion issue arises immediately in this setting.
Parallel consensual neural networks.
Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H
1997-01-01
A new type of neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied to classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times, and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
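A minimal sketch of the consensus idea follows in Python: several stage classifiers are trained on differently transformed copies of the input and their probabilistic outputs are combined with validation-accuracy weights. Random linear transforms stand in for the paper's binary/wavelet-packet transforms, and scikit-learn MLPs stand in for the stage networks; all sizes and seeds are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage networks: each sees a different (random linear) transform of the input.
stages, transforms, weights = [], [], []
for seed in range(4):
    T = np.random.default_rng(seed).normal(size=(X.shape[1], X.shape[1]))
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    net.fit(X_tr @ T, y_tr)
    stages.append(net)
    transforms.append(T)
    weights.append(net.score(X_val @ T, y_val))   # weight each stage by validation accuracy

weights = np.array(weights) / np.sum(weights)
proba = sum(w * net.predict_proba(X_val @ T)
            for w, net, T in zip(weights, stages, transforms))
consensus = proba.argmax(axis=1)                  # weighted consensual decision
print("consensual accuracy:", np.mean(consensus == y_val))
```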
1998-05-22
This research project is concerned with two distinct aspects of the analysis and processing of signals, with application to multitarget tracking. (Office of Naval Research report PR-98-1.)
Development of a multitarget tracking system for paramecia
NASA Astrophysics Data System (ADS)
Yeh, Yu-Sing; Huang, Ke-Nung; Jen, Sun-Lon; Li, Yan-Chay; Young, Ming-Shing
2010-07-01
This investigation develops a multitarget tracking system for the motile protozoan Paramecium. The system can recognize, track, and record the orbits of swimming paramecia within a 4 mm diameter circular experimental pool. The proposed system is implemented using an optical microscope, a charge-coupled device camera, and a software tool, Laboratory Virtual Instrumentation Engineering Workbench (LABVIEW). An algorithm for processing the images and analyzing the traces of the paramecia is developed in LABVIEW. It focuses on extracting meaningful data in an experiment and recording them to elucidate the behavior of paramecia. The algorithm can also continue to track paramecia even if they are transposed or collide with each other. The experiment demonstrates that this multitarget tracking design can track more than five paramecia and simultaneously yield meaningful data from the moving paramecia at a maximum speed of 1.7 mm/s.
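The recognition-and-association core of such a tracker can be sketched in a few lines of Python (the original system is implemented in LABVIEW). Bright blobs are segmented per frame and their centroids are linked across frames by greedy nearest-neighbour matching; the threshold and gating distance are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def detect_centroids(frame, thresh=0.5):
    # Segment bright blobs (paramecia) and return their (row, col) centroids.
    labels, n = ndimage.label(frame > thresh)
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def associate(prev, curr, max_dist=5.0):
    # Greedy nearest-neighbour assignment between consecutive frames.
    if len(prev) == 0 or len(curr) == 0:
        return {}
    D = cdist(prev, curr)
    matches = {}
    for i in np.argsort(D, axis=None):
        r, c = divmod(i, D.shape[1])
        if r not in matches and c not in matches.values() and D[r, c] < max_dist:
            matches[r] = c
    return matches

# Two synthetic frames with three bright "paramecia" drifting 2 px to the right
frame1 = np.zeros((100, 100)); frame2 = np.zeros((100, 100))
for x, y in [(20, 20), (50, 60), (80, 30)]:
    frame1[y - 2:y + 2, x - 2:x + 2] = 1.0
    frame2[y - 2:y + 2, x:x + 4] = 1.0
print(associate(detect_centroids(frame1), detect_centroids(frame2)))
```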
[Application of network biology on study of traditional Chinese medicine].
Tian, Sai-Sai; Yang, Jian; Zhao, Jing; Zhang, Wei-Dong
2018-01-01
With the completion of the Human Genome Project, people have gradually recognized that the functions of a biological system are fulfilled through network-type interactions between genes, proteins and small molecules, while complex diseases are caused by the imbalance of biological processes due to a number of gene expression disorders. These insights have contributed to the rise of the concept of "multi-target" drug discovery. Treatment and diagnosis in traditional Chinese medicine are based on holism and syndrome differentiation. At the molecular level, traditional Chinese medicine is characterized by multi-component and multi-target prescriptions, which is expected to provide a reference for the development of multi-target drugs. This paper reviews the application of network biology in traditional Chinese medicine in six aspects, with the expectation of providing a reference for the modernized study of traditional Chinese medicine. Copyright© by the Chinese Pharmaceutical Association.
A Noncontact FMCW Radar Sensor for Displacement Measurement in Structural Health Monitoring
Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi
2015-01-01
This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and the multiple-target displacement measurement is further analyzed and simulated. In addition, an FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. Outdoor experiments verify the feasibility of this sensing method for multi-target displacement measurement, and the experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter-level accuracy. PMID:25822139
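The quantities behind these claims follow from standard FMCW relations: range resolution c/(2B), beat frequency 2RB/(cT) for a sawtooth sweep, and displacement from beat-signal phase via delta_d = lambda * delta_phi / (4 * pi). The numbers below (bandwidth, sweep time, carrier frequency, phase change) are illustrative assumptions, not the prototype's actual parameters.

```python
import numpy as np

c = 3e8        # speed of light (m/s)
B = 1e9        # sweep bandwidth: 1 GHz (assumed)
T = 1e-3       # sweep duration: 1 ms (assumed)
f_c = 24e9     # carrier frequency: 24 GHz (assumed)

# Range resolution of an FMCW radar: dR = c / (2B) -> separates multiple targets
print("range resolution:", c / (2 * B), "m")                     # 0.15 m

# Beat frequency produced by a target at range R: f_b = 2*R*B / (c*T)
R = 10.0
print("beat frequency at 10 m:", 2 * R * B / (c * T) / 1e3, "kHz")

# Sub-wavelength displacement from the beat-signal phase: delta_d = lambda*delta_phi/(4*pi)
lam = c / f_c
delta_phi = np.deg2rad(10.0)                                      # assumed 10 deg phase change
print("displacement:", lam * delta_phi / (4 * np.pi) * 1e3, "mm")
```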
A noncontact FMCW radar sensor for displacement measurement in structural health monitoring.
Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi
2015-03-26
This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and the multiple-target displacement measurement is further analyzed and simulated. In addition, an FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. Outdoor experiments verify the feasibility of this sensing method for multi-target displacement measurement, and the experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter-level accuracy.
Simoni, Elena; Bartolini, Manuela; Abu, Izuddin F; Blockley, Alix; Gotti, Cecilia; Bottegoni, Giovanni; Caporaso, Roberta; Bergamini, Christian; Andrisano, Vincenza; Cavalli, Andrea; Mellor, Ian R; Minarini, Anna; Rosini, Michela
2017-06-01
Alzheimer pathogenesis has been associated with a network of processes working simultaneously and synergistically. Over time, much interest has been focused on cholinergic transmission and its mutual interconnections with other active players of the disease. Besides the cholinesterase mainstay, the multifaceted interplay between nicotinic receptors and amyloid is now considered to have a central role in neuroprotection. Thus, the multitarget drug-design strategy has emerged as an opportunity to address the disease network. By exploiting the multitarget approach, hybrid compounds have been synthesized and studied in vitro and in silico against selected targets of the cholinergic and amyloidogenic pathways. The new molecules were able to target the cholinergic system, by joining direct nicotinic receptor stimulation to acetylcholinesterase inhibition, and to inhibit amyloid-β aggregation. The compounds emerged as a suitable starting point for a further optimization process.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino.
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
NASA Astrophysics Data System (ADS)
Barlacchi, Gianni; de Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-10-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
Multisource energy system project
NASA Astrophysics Data System (ADS)
Dawson, R. W.; Cowan, R. A.
1987-03-01
The mission of this project is to investigate methods of providing uninterruptible power to Army communications and navigational facilities, many of which have limited access or are located in rugged terrain. Two alternatives are currently available for deploying terrestrial stand-alone power systems: (1) conventional electric systems powered by diesel fuel, propane, or natural gas, and (2) alternative power systems using renewable energy sources such as solar photovoltaics (PV) or wind turbines (WT). The increased cost of fuels for conventional systems and the high cost of energy storage for single-source renewable energy systems have created interest in the hybrid or multisource energy system. This report will provide a summary of the first and second interim reports, final test results, and a user's guide for software that will assist in applying and designing multi-source energy systems.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others. PMID:26528394
Husain, Syed S; Kalinin, Alexandr; Truong, Anh; Dinov, Ivo D
Intuitive formulation of informative and computationally efficient queries on big and complex datasets presents a number of challenges. As data collection becomes increasingly streamlined and ubiquitous, data exploration, discovery and analytics get considerably harder. Exploratory querying of heterogeneous and multi-source information is both difficult and necessary to advance our knowledge about the world around us. We developed a mechanism to integrate dispersed multi-source data and serve the mashed information via human and machine interfaces in a secure, scalable manner. This process facilitates the exploration of subtle associations between variables, population strata, or clusters of data elements, which may be opaque to standard independent inspection of the individual sources. This new platform includes a device-agnostic tool (Dashboard webapp, http://socr.umich.edu/HTML5/Dashboard/) for graphically querying, navigating and exploring the multivariate associations in complex heterogeneous datasets. The paper illustrates this core functionality and service-oriented infrastructure using healthcare data (e.g., US data from the 2010 Census, Demographic and Economic surveys, Bureau of Labor Statistics, and Center for Medicare Services) as well as Parkinson's disease neuroimaging data. Both the back-end data archive and the front-end dashboard interfaces are continuously expanded to include additional data elements and new ways to customize the human and machine interactions. A client-side data import utility allows for easy and intuitive integration of user-supplied datasets. This completely open-science framework may be used for exploratory analytics, confirmatory analyses, meta-analyses, and education and training purposes in a wide variety of fields.
Network-based drug discovery by integrating systems biology and computational technologies
Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua
2013-01-01
Network-based intervention has been a trend in curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery due to its potential treatment effects by synergy. Recently, multiple systems biology platforms have proven powerful for uncovering molecular mechanisms and connections between drugs and their targeted dynamic networks. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple ‘-omics’ databases. Newly developed algorithm- or network-based computational models can tightly integrate ‘-omics’ databases and optimize combinational regimens of drug development, which encourages the development of medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integration of medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the limited breadth, depth and degree of standardization of herbal medicine. Standardization of the methodology and terminology of multiple systems biology and herbal databases would facilitate this integration. Enhancing publicly accessible databases and increasing the number of studies applying systems biology platforms to herbal medicine would also be helpful. Further integration across various ‘-omics’ platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-01-01
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-12-13
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.
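The core of the one-class collaborative filtering idea, interaction-matrix weighting with factorization, can be sketched with a plain weighted alternating-least-squares factorization in Python. The dual similarity regularization from chemicals and proteins used in the paper is omitted here for brevity; the rank, confidence weights and regularization value are assumptions.

```python
import numpy as np

def weighted_mf(R, W, rank=8, lam=0.1, iters=20, seed=0):
    """One-class weighted matrix factorization by alternating least squares.
    R: binary interaction matrix (chemicals x proteins); W: confidence weights."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.normal(size=(m, rank))
    Q = 0.1 * rng.normal(size=(n, rank))
    for _ in range(iters):
        for i in range(m):                      # update chemical factors
            Wi = np.diag(W[i])
            P[i] = np.linalg.solve(Q.T @ Wi @ Q + lam * np.eye(rank), Q.T @ Wi @ R[i])
        for j in range(n):                      # update protein factors
            Wj = np.diag(W[:, j])
            Q[j] = np.linalg.solve(P.T @ Wj @ P + lam * np.eye(rank), P.T @ Wj @ R[:, j])
    return P @ Q.T                              # predicted interaction scores

rng = np.random.default_rng(1)
R = (rng.random((60, 40)) < 0.05).astype(float)   # sparse known chemical-protein interactions
W = 1.0 + 40.0 * R                                 # higher confidence on observed interactions
scores = weighted_mf(R, W)
print("top predicted target for chemical 0:", scores[0].argmax())
```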
Senay, Gabriel B.; Velpuri, Naga Manohar; Alemu, Henok; Pervez, Shahriar Md; Asante, Kwabena O; Karuki, Gatarwa; Taa, Asefa; Angerer, Jay
2013-01-01
Timely information on the availability of water and forage is important for the sustainable development of pastoral regions. The lack of such information increases the dependence of pastoral communities on perennial sources, which often leads to competition and conflicts. The provision of timely information is a challenging task, especially due to the scarcity or non-existence of conventional station-based hydrometeorological networks in the remote pastoral regions. A multi-source water balance modelling approach driven by satellite data was used to operationally monitor daily water level fluctuations across the pastoral regions of northern Kenya and southern Ethiopia. Advanced Spaceborne Thermal Emission and Reflection Radiometer data were used for mapping and estimating the surface area of the waterholes. Satellite-based rainfall, modelled run-off and evapotranspiration data were used to model daily water level fluctuations. Mapping of waterholes was achieved with 97% accuracy. Validation of modelled water levels with field-installed gauge data demonstrated the ability of the model to capture the seasonal patterns and variations. Validation results indicate that the model explained 60% of the observed variability in water levels, with an average root-mean-squared error of 22%. Up-to-date information on rainfall, evaporation, scaled water depth and condition of the waterholes is made available daily in near-real time via the Internet (http://watermon.tamu.edu). Such information can be used by non-governmental organizations, governmental organizations and other stakeholders for early warning and decision making. This study demonstrated an integrated approach for establishing an operational waterhole monitoring system using multi-source satellite data and hydrologic modelling.
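A toy daily bucket model conveys the kind of water-balance bookkeeping described above; it is not the authors' calibrated satellite-driven model, and the fluxes, pool area and seepage term are made-up assumptions.

```python
import numpy as np

def waterhole_levels(rain, runoff_in, et, area=1e4, seepage=1.0, h0=500.0):
    """Daily scaled water-depth simulation. rain and et are in mm/day,
    runoff_in is a modelled inflow volume (m^3/day) spread over the pool area."""
    levels, level = np.empty(len(rain)), h0
    for t in range(len(rain)):
        level += rain[t] + 1000.0 * runoff_in[t] / area - et[t] - seepage
        level = max(level, 0.0)                 # the pool cannot hold negative water
        levels[t] = level
    return levels

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 10.0, 365)                # sporadic rainfall, mm/day
runoff_in = 5.0 * rain                          # crude rainfall-runoff proxy, m^3/day
et = np.full(365, 6.0)                          # evapotranspiration, mm/day
levels = waterhole_levels(rain, runoff_in, et)
print("days below 100 mm scaled depth:", int((levels < 100).sum()))
```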
Crossley, James G M
2015-01-01
Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals, but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. The KSF framework provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability; response rates and unstructured interviews to evaluate feasibility; and factor structure and correlation studies to evaluate validity. On a voluntary basis the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable an assessment method for informing appraisal in nurses as it is in other health professional groups. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
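The reported "G = 0.7 at 11 responses" comes from generalizability theory; the relative G coefficient for a person-by-rater design can be projected for different numbers of raters as below. The variance components used here are illustrative assumptions chosen to roughly reproduce that figure, not the study's estimates.

```python
def g_coefficient(var_person, var_residual, n_raters):
    """Relative G coefficient for a fully crossed person x rater design."""
    return var_person / (var_person + var_residual / n_raters)

# Assumed variance components: person = 1.0, residual = 4.7
for n in (5, 11, 20):
    print(n, "raters ->", round(g_coefficient(1.0, 4.7, n), 2))
```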
Gradient-based multiresolution image fusion.
Petrović, Valdimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters, to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
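A much simpler relative of this scheme, wavelet decomposition with per-coefficient maximum-magnitude selection, illustrates multiresolution fusion in a few lines. It is not the paper's gradient-map/quadrature-mirror-filter method; it requires the PyWavelets package, and the wavelet choice and level count are assumptions.

```python
import numpy as np
import pywt

def fuse_max_abs(img_a, img_b, wavelet="db2", levels=3):
    """Multiresolution fusion: at every subband coefficient, keep whichever
    input has the larger magnitude (a proxy for the stronger local gradient)."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
a, b = rng.random((128, 128)), rng.random((128, 128))
print(fuse_max_abs(a, b).shape)
```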
Lai, Michelle Mei Yee; Roberts, Noel; Martin, Jenepher
2014-09-17
Oral feedback from clinical educators is the traditional teaching method for improving clinical consultation skills in medical students. New approaches are needed to enhance this teaching model. Multisource feedback is a commonly used assessment method for learning among practising clinicians, but this assessment has not been explored rigorously in medical student education. This study seeks to evaluate whether additional feedback on patient satisfaction improves medical student performance. The Patient Teaching Associate (PTA) Feedback Study is a single-site, randomized controlled, double-blinded trial with two parallel groups. An after-hours general practitioner clinic in Victoria, Australia, is adapted as a teaching clinic during the day. Medical students from two universities in their first clinical year participate in six simulated clinical consultations with ambulatory patient volunteers living with chronic illness. Eligible students will be randomized in equal proportions to receive patient satisfaction score feedback with the usual multisource feedback, or the usual multisource feedback alone as control. Block randomization will be performed. We will assess patient satisfaction and consultation performance outcomes at baseline and after one semester, and will compare any change in mean scores between the last session and baseline. We will model the data using regression analysis to determine any differences between intervention and control groups. Full ethical approval has been obtained for the study. This trial will comply with CONSORT guidelines and we will disseminate data at conferences and in peer-reviewed journals. This is the first proposed trial to determine whether consumer feedback enhances the use of multisource feedback in medical student education, and to assess the value of multisource feedback in teaching and learning about the management of ambulatory patients living with chronic conditions. Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12613001055796.
Multisource feedback analysis of pediatric outpatient teaching
2013-01-01
Background: This study aims to evaluate the outpatient communication skills of medical students via multisource feedback, which may be useful to map future directions in improving physician-patient communication. Methods: Family respondents of patients, a nurse, a clinical teacher, and a research assistant evaluated video-recorded medical students’ interactions with outpatients by using multisource feedback questionnaires; students also assessed their own skills. The questionnaire was answered based on the video-recorded interactions between outpatients and the medical students. Results: A total of 60 family respondents of the 60 patients completed the questionnaires; 58 (96.7%) of them agreed with the video recording. Two reasons for reluctance were “personal privacy” issues and “simply disagree” with the video recording. The average satisfaction score of the 58 students was 85.1 points, indicating students’ performance was in the category between satisfied and very satisfied. The family respondents were most satisfied with the “teacher’s attitude”, followed by “teaching quality”. In contrast, the family respondents were least satisfied with “being open to questions”. Among the 6 assessment domains of communication skills, the students scored highest on “explaining” and lowest on “giving recommendations”. In the detailed assessment by family respondents, the students scored lowest on “asking about life/school burden”. In the multisource analysis, the nurses’ mean score was much higher and the students’ mean self-assessment score was lower than the average scores on all domains. Conclusion: The willingness and satisfaction of family respondents were high in this study. Students scored the lowest on giving recommendations to patients. Multisource feedback with video recording is useful in providing more accurate evaluation of students’ communication competence and in identifying the areas of communication that require enhancement. PMID:24180615
Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.
Nielsen, Allan Aasbjerg
2002-01-01
This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets which means, for example, that the R-mode CVs are insensitive to changes over time in offset and gain in a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods. This paper very briefly describes multiset CCA-based multiset PLS. Also, the CCA methods can be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).
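Two-set CCA, the building block of the multiset analysis described above, can be reproduced with scikit-learn on synthetic data sharing a common latent signal; the canonical correlations then decrease with the order of the canonical variates, mirroring the behaviour reported for the Landsat sets. Dimensions, noise level and seeds are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))                     # shared signal across two "sets"
X1 = latent @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(300, 6))
X2 = latent @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(300, 6))

cca = CCA(n_components=2)
U, V = cca.fit_transform(X1, X2)
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.2f}")   # decreasing with CV order
```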
Ma, X H; Wang, R; Tan, C Y; Jiang, Y Y; Lu, T; Rao, H B; Li, X Y; Go, M L; Low, B C; Chen, Y Z
2010-10-04
Multitarget agents have been increasingly explored for enhancing efficacy and reducing countertarget activities and toxicities. Efficient virtual screening (VS) tools for searching selective multitarget agents are desired. Combinatorial support vector machines (C-SVM) were tested as VS tools for searching dual-inhibitors of 11 combinations of 9 anticancer kinase targets (EGFR, VEGFR, PDGFR, Src, FGFR, Lck, CDK1, CDK2, GSK3). C-SVM trained on 233-1,316 non-dual-inhibitors correctly identified 26.8%-57.3% (majority >36%) of the 56-230 intra-kinase-group dual-inhibitors (equivalent to the 50-70% yields of two independent individual target VS tools), and 12.2% of the 41 inter-kinase-group dual-inhibitors. C-SVM were fairly selective in misidentifying as dual-inhibitors 3.7%-48.1% (majority <20%) of the 233-1,316 non-dual-inhibitors of the same kinase pairs and 0.98%-4.77% of the 3,971-5,180 inhibitors of other kinases. C-SVM produced low false-hit rates in misidentifying as dual-inhibitors 1,746-4,817 (0.013%-0.036%) of the 13.56 M PubChem compounds, 12-175 (0.007%-0.104%) of the 168 K MDDR compounds, and 0-84 (0.0%-2.9%) of the 19,495-38,483 MDDR compounds similar to the known dual-inhibitors. C-SVM was compared to other VS methods Surflex-Dock, DOCK Blaster, kNN and PNN against the same sets of kinase inhibitors and the full set or subset of the 1.02 M Zinc clean-leads data set. C-SVM produced comparable dual-inhibitor yields, slightly better false-hit rates for kinase inhibitors, and significantly lower false-hit rates for the Zinc clean-leads data set. Combinatorial SVM showed promising potential for searching selective multitarget agents against intra-kinase-group kinases without explicit knowledge of multitarget agents.
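The virtual-screening setup can be sketched as an ordinary binary SVM over molecular fingerprints, trained to separate dual-inhibitors from non-dual-inhibitors for one kinase pair. The random fingerprints, class sizes and SVM hyperparameters below are placeholders, so the cross-validated score is meaningless except as a demonstration of the pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical binary fingerprints: dual-inhibitors (label 1) vs non-dual-inhibitors (label 0)
X_dual = rng.integers(0, 2, size=(120, 256))
X_non = rng.integers(0, 2, size=(600, 256))
X = np.vstack([X_dual, X_non]).astype(float)
y = np.r_[np.ones(120), np.zeros(600)]

# One SVM per kinase-target pair; here a single pair with an RBF kernel
clf = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```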
ERIC Educational Resources Information Center
Burns, G. Leonard; Desmul, Chris; Walsh, James A.; Silpakit, Chatchawan; Ussahawanitchakit, Phapruke
2009-01-01
Confirmatory factor analysis was used with a multitrait (attention-deficit/hyperactivity disorder-inattention, attention-deficit/hyperactivity disorder-hyperactivity/impulsivity, oppositional defiant disorder toward adults, academic competence, and social competence) by multisource (mothers and fathers) matrix to test the invariance and…
Ng, Kok-Yee; Koh, Christine; Ang, Soon; Kennedy, Jeffrey C; Chan, Kim-Yin
2011-09-01
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters, followed by peers, will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers shows that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are strongest for subordinates relative to peers and superiors; (d) the effects of collectivism on leniency were stronger for subordinates and peers than for superiors; effects on halo were stronger for subordinates than superiors, but these effects did not differ for subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings. PsycINFO Database Record (c) 2011 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Han, P.; Long, D.
2017-12-01
Snow water equivalent (SWE) and total water storage (TWS) changes are important hydrological state variables over cryospheric regions, such as China's Upper Yangtze River (UYR) basin. Accurate simulation of these two state variables plays a critical role in understanding hydrological processes over this region and, in turn, benefits water resource management, hydropower development, and ecological integrity over the lower reaches of the Yangtze River, one of the largest rivers globally. In this study, an improved CREST model coupled with a snow and glacier melting module was used to simulate SWE and TWS changes over the UYR, and to quantify contributions of snow and glacier meltwater to the total runoff. Forcing, calibration, and validation data are mainly from multi-source remote sensing observations, including satellite-based precipitation estimates, passive microwave remote sensing-based SWE, and GRACE-derived TWS changes, along with streamflow measurements at the Zhimenda gauging station. Results show that multi-source remote sensing information can be extremely valuable in model forcing, calibration, and validation over the poorly gauged region. The simulated SWE and TWS changes and the observed counterparts are highly consistent, showing NSE coefficients higher than 0.8. The results also show that the contributions of snow and glacier meltwater to the total runoff are 8% and 6%, respectively, during the period 2003‒2014, making meltwater an important source of runoff. Moreover, from this study, the TWS is found to increase at a rate of 5 mm/a (0.72 Gt/a) for the period 2003‒2014. The snow melting module may overestimate SWE for high precipitation events and was improved in this study. Key words: CREST model; Remote Sensing; Melting model; Source Region of the Yangtze River
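The snow module of such a model is often a temperature-index (degree-day) scheme; the sketch below shows that logic only, not the improved CREST module used in the study, and the degree-day factor, temperature thresholds and forcing series are assumptions.

```python
import numpy as np

def snow_module(precip, temp, ddf=4.0, t_snow=0.0, t_melt=0.0):
    """Degree-day snow accumulation and melt (mm water equivalent per day)."""
    swe, melt = np.zeros(len(precip)), np.zeros(len(precip))
    pack = 0.0
    for t in range(len(precip)):
        pack += precip[t] if temp[t] <= t_snow else 0.0       # snowfall accumulates
        m = min(pack, max(0.0, ddf * (temp[t] - t_melt)))     # temperature-index melt
        pack -= m
        swe[t], melt[t] = pack, m
    return swe, melt

rng = np.random.default_rng(0)
temp = 10 * np.sin(np.linspace(0, 2 * np.pi, 365) - np.pi / 2)   # annual cycle, -10..10 C
precip = rng.gamma(0.5, 6.0, 365)                                # mm/day
swe, melt = snow_module(precip, temp)
print("peak SWE (mm):", round(swe.max(), 1),
      "| melt fraction of precip:", round(melt.sum() / precip.sum(), 2))
```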
Modelling and Characterisation of Detection Models in WAMI for Handling Negative Information
2014-02-01
Machine learning techniques are used to model and predict the behaviour of the multi-stage detectors used in LoFT, which builds on feature detectors, motion models, and descriptor and template adaptation, with its state space defined in 2D. The resulting detection model is then used in a Probability Hypothesis Density (PHD) filter.
High Level Information Fusion (HLIF) with nested fusion loops
NASA Astrophysics Data System (ADS)
Woodley, Robert; Gosnell, Michael; Fischer, Amber
2013-05-01
Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion—perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analyses within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling, and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without the double counting of information or other biasing issues. The initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analyst's efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.
URREF Reliability Versus Credibility in Information Fusion
2013-07-01
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images can be obtained through the fusion of infrared and low-light-level images, and contain the information of both. Such fusion images can help observers to understand multichannel imagery comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also retain rich natural information of the scenes.
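A naive false-colour mapping of the two bands, shown below, is the simplest form of such fusion and makes clear why target learning is needed on top of it: the channels are assigned by hand rather than by perception. The channel assignment here is an assumption, not the authors' scheme.

```python
import numpy as np

def naive_color_fusion(lll, ir):
    """Map low-light-level and infrared bands (values in [0, 1]) into an RGB image."""
    rgb = np.empty(lll.shape + (3,))
    rgb[..., 0] = np.clip(ir, 0, 1)                    # red carries thermal targets
    rgb[..., 1] = np.clip(lll, 0, 1)                   # green carries low-light scene detail
    rgb[..., 2] = np.clip(np.abs(lll - ir), 0, 1)      # blue highlights band differences
    return rgb

rng = np.random.default_rng(0)
print(naive_color_fusion(rng.random((64, 64)), rng.random((64, 64))).shape)
```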
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
Multi-target camera tracking, hand-off and display LDRD 158819 final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
NASA Astrophysics Data System (ADS)
Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.
2016-11-01
In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of fusion of multi-source data in the field of urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach is proposed in our study. The processing of the proposed method consisted of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image was segmented into objects, and NDVI, Coastal Blue and nDSM thresholds were set for extracting building objects. (2) In the second hierarchy, after removing building objects, WorldView-2 fused imagery was obtained by Haze-ratio-based (HR) fusion and segmented, and an SVM classifier was applied to generate road/parking lot, vegetation and bare soil objects. (3) Trees and grasslands were then split based on an nDSM threshold (2.4 meters). The results showed that, compared with pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better urban land cover classification performance, with the overall accuracy (OA) and overall kappa (OK) improved to 92.75% and 0.90, respectively. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings based on LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands by setting an nDSM threshold.
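The two-level rule hierarchy can be sketched as vectorized threshold tests over per-object features. Only the 2.4 m tree/grass split is taken from the paper; the NDVI, height and Coastal Blue cut-offs below are illustrative placeholders, and the segmentation and SVM stages are omitted.

```python
import numpy as np

def classify_objects(ndvi, ndsm, coastal_blue):
    """Hierarchical rule-based labelling of image objects (per-object mean features)."""
    labels = np.full(ndvi.shape, "road/parking lot", dtype=object)
    # Hierarchy 1: buildings = elevated, non-vegetated objects with low Coastal Blue
    building = (ndsm > 2.0) & (ndvi < 0.2) & (coastal_blue < 0.15)
    labels[building] = "building"
    # Hierarchy 2: vegetation split by the 2.4 m nDSM threshold; bare soil by low NDVI
    veg = (ndvi >= 0.3) & ~building
    labels[veg & (ndsm >= 2.4)] = "tree"
    labels[veg & (ndsm < 2.4)] = "grassland"
    labels[~veg & ~building & (ndvi < 0.1) & (ndsm < 2.0)] = "bare soil"
    return labels

rng = np.random.default_rng(0)
ndvi, ndsm, cb = rng.random(1000), 10 * rng.random(1000), rng.random(1000)
labels = classify_objects(ndvi, ndsm, cb)
print({c: int((labels == c).sum()) for c in np.unique(labels)})
```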
NASA Astrophysics Data System (ADS)
Hu, Rongming; Wang, Shu; Guo, Jiao; Guo, Liankun
2018-04-01
Impervious surface area and vegetation coverage are important biophysical indicators of urban surface features that can be derived from medium-resolution images. However, remote sensing data obtained by a single sensor are easily affected by many factors such as weather conditions, and their spatial and temporal resolution cannot meet the needs of soil erosion estimation. Therefore, integrated multi-source remote sensing data are needed to carry out vegetation coverage estimation at high spatio-temporal resolution. Vegetation coverage and impervious surface data at two spatial and temporal scales were obtained from MODIS and Landsat 8 remote sensing images. Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the vegetation coverage data at the two scales were fused, yielding fused vegetation coverage (ESTARFM FVC) and an impervious surface layer with high spatiotemporal resolution (30 m, 8 days). On this basis, the spatial variability of the impervious surface and vegetation cover landscape in the study area was measured by means of statistics and spatial autocorrelation analysis. The results showed that: 1) ESTARFM FVC and the impervious surface layer have high accuracy and can characterize the biophysical components of the land surface; 2) the average impervious surface proportion and the spatial configuration of each area differ, being affected by natural conditions and urbanization. In the urban area of Xi'an, which has typical characteristics of spontaneous urbanization, landscapes are fragmented and have less spatial dependence.
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, outlier-corrupted process and measurement noise can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as Student’s t distributions and approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. Then, a closed PHD recursion is obtained based on the Student’s t approximation. Our approach makes full use of the heavy-tailed characteristic of the Student’s t distribution to handle situations with heavy-tailed process and measurement noise. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
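For reference, the standard PHD recursion (ignoring target spawning) that the STM-PHD filter approximates with Student's t components in place of Gaussian ones is, in the usual notation with birth intensity gamma, survival probability p_S, detection probability p_D and clutter intensity kappa:

```latex
% PHD prediction and update; in the STM-PHD filter the transition density f,
% likelihood g and intensity components are Student's t rather than Gaussian.
D_{k|k-1}(x) = \gamma_k(x) + \int p_S(\zeta)\, f_{k|k-1}(x \mid \zeta)\, D_{k-1}(\zeta)\, d\zeta
\qquad
D_k(x) = \bigl[1 - p_D(x)\bigr] D_{k|k-1}(x)
 + \sum_{z \in Z_k} \frac{p_D(x)\, g_k(z \mid x)\, D_{k|k-1}(x)}
 {\kappa_k(z) + \int p_D(\xi)\, g_k(z \mid \xi)\, D_{k|k-1}(\xi)\, d\xi}
```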
PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-11-06
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.
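The EKF building block used inside such a PMHT (where, roughly speaking, association-weighted synthetic measurements replace the raw ones) is a single predict/update cycle. A generic sketch is below; the model functions, Jacobians and noise matrices are supplied by the caller and the example values are assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter predict/update cycle.
    f, h: nonlinear transition/measurement functions; F, H: their Jacobians at x."""
    # Predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update
    y = z - h(x_pred)                                # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)      # Kalman gain
    return x_pred + K @ y, (np.eye(len(x)) - K @ H(x_pred)) @ P_pred

# Example with a (trivially linear) constant-velocity model
F0 = np.array([[1.0, 1.0], [0.0, 1.0]])
H0 = np.array([[1.0, 0.0]])
x, P = ekf_step(np.zeros(2), np.eye(2), np.array([1.2]),
                f=lambda x: F0 @ x, F=lambda x: F0,
                h=lambda x: H0 @ x, H=lambda x: H0,
                Q=0.01 * np.eye(2), R=np.array([[0.25]]))
print(x)
```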
NASA Astrophysics Data System (ADS)
Guan, Yan-Qing; Zheng, Zhe; Huang, Zheng; Li, Zhibin; Niu, Shuiqin; Liu, Jun-Ming
2014-05-01
Nanomagnetic materials offer exciting avenues for advancing cancer therapies. Most research has focused on efficient delivery of drugs in the body by incorporating various drug molecules onto the surface of nanomagnetic particles. The challenge is how to synthesize low-toxicity nanocarriers with multi-target drug loading, and the cancer cell death mechanisms associated with such nanocarriers also remain unclear. Following cell biology mechanisms, we develop a liquid photo-immobilization approach to attach doxorubicin, folic acid, tumor necrosis factor-α, and interferon-γ onto oleic acid-coated Fe3O4 magnetic nanoparticles, preparing a novel inner/outer-controlled multi-target magnetic nanoparticle drug carrier. In this work, this approach is demonstrated by a variety of structural and biomedical characterizations addressing the anti-cancer effects in vivo and in vitro on HeLa cells, and it is shown to be highly efficient and powerful in treating cancer cells through a programmed cell death mechanism valuable for overcoming drug resistance.
A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking
Wang, Xuedong; Sun, Shudong; Corchado, Juan M.
2017-01-01
We review some advances in the particle filtering (PF) algorithm that have been achieved in the last decade in the context of target tracking, with regard to either a single target or multiple targets in the presence of false or missing data. The first part of our review covers remarkable achievements that have been made for the single-target PF from several aspects, including the importance proposal, computing efficiency, particle degeneracy/impoverishment and constrained/multi-modal systems. The second part of our review analyzes the intractable challenges raised within general multitarget (multi-sensor) tracking due to random target birth and termination, false alarms, misdetection, measurement-to-track (M2T) uncertainty and track uncertainty. The mainstream multitarget PF approaches consist of two main classes, one based on M2T association approaches and the other not, such as the finite set statistics-based PF. In either case, significant challenges remain due to unknown tracking scenarios and integrated tracking management. PMID:29168772
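The baseline against which these advances are measured is the bootstrap particle filter: propagate, weight by the likelihood, resample. A minimal 1-D sketch follows; the random-walk model and noise levels are assumptions, and plain multinomial resampling is used even though it aggravates the impoverishment discussed above.

```python
import numpy as np

def bootstrap_pf(zs, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)            # initial particle cloud
    estimates = []
    for z in zs:
        x = x + rng.normal(0.0, q, n_particles)      # propagate through the motion model
        w = np.exp(-0.5 * ((z - x) / r) ** 2)        # weight by measurement likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
        x = x[idx]                                   # fights degeneracy, risks impoverishment
        estimates.append(x.mean())
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.1, 100))
zs = truth + rng.normal(0, 0.5, 100)
est = bootstrap_pf(zs)
print("RMSE:", round(float(np.sqrt(np.mean((est - truth) ** 2))), 3))
```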
Identification and characterization of carprofen as a multi-target FAAH/COX inhibitor
Favia, Angelo D.; Habrant, Damien; Scarpelli, Rita; Migliore, Marco; Albani, Clara; Bertozzi, Sine Mandrup; Dionisi, Mauro; Tarozzo, Glauco; Piomelli, Daniele; Cavalli, Andrea; De Vivo, Marco
2013-01-01
Pain and inflammation are major therapeutic areas for drug discovery. Current drugs for these pathologies have limited efficacy, however, and often cause a number of unwanted side effects. In the present study, we identify the non-steroid anti-inflammatory drug, carprofen, as a multi-target-directed ligand that simultaneously inhibits cyclooxygenase-1 (COX-1), COX-2 and fatty acid amide hydrolase (FAAH). Additionally, we synthesized and tested several racemic derivatives of carprofen, sharing this multi-target activity. This may result in improved analgesic efficacy and reduced side effects (Naidu, et al (2009) J Pharmacol Exp Ther 329, 48-56; Fowler, C.J. et al. (2012) J Enzym Inhib Med Chem Jan 6; Sasso, et al (2012) Pharmacol Res 65, 553). The new compounds are among the most potent multi-target FAAH/COXs inhibitors reported so far in the literature, and thus may represent promising starting points for the discovery of new analgesic and anti-inflammatory drugs. PMID:23043222
Impact of workplace based assessment on doctors' education and performance: a systematic review.
Miller, Alice; Archer, Julian
2010-09-24
To investigate the literature for evidence that workplace based assessment affects doctors' education and performance. Systematic review. The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods: Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors' performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors' education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.
Tarkang, Protus Arrey; Appiah-Opong, Regina; Ofori, Michael F; Ayong, Lawrence S; Nyarko, Alexander K
2016-01-01
There is an urgent need for new anti-malaria drugs with broad therapeutic potential and novel mode of action, for effective treatment and to overcome emerging drug resistance. Plant-derived anti-malarials remain a significant source of bioactive molecules in this regard. The multicomponent formulation forms the basis of phytotherapy. Mechanistic reasons for the poly-pharmacological effects of plants include increased bioavailability, interference with cellular transport processes, activation of pro-drugs/deactivation of active compounds to inactive metabolites and action of synergistic partners at different points of the same signaling cascade. These effects are known as the multi-target concept. However, due to the intrinsic complexity of natural product-based drug discovery, there is a need to rethink the approaches toward understanding their therapeutic effect. This review discusses the multi-target phytotherapeutic concept and its application in biomarker identification using the modified reverse pharmacology - systems biology approach. Considerations include the generation of a product library, high throughput screening (HTS) techniques for efficacy and interaction assessment, High Performance Liquid Chromatography (HPLC)-based anti-malarial profiling and animal pharmacology. This approach is an integrated interdisciplinary implementation of tailored technology platforms coupled to miniaturized biological assays, to track and characterize the multi-target bioactive components of botanicals as well as identify potential biomarkers. While preserving biodiversity, this will serve as a primary step towards the development of standardized phytomedicines, as well as facilitate lead discovery for chemical prioritization and downstream clinical development.
Tonelli, Michele; Catto, Marco; Tasso, Bruno; Novelli, Federica; Canu, Caterina; Iusco, Giovanna; Pisani, Leonardo; Stradis, Angelo De; Denora, Nunzio; Sparatore, Anna; Boido, Vito; Carotti, Angelo; Sparatore, Fabio
2015-06-01
Multitarget therapeutic leads for Alzheimer's disease were designed on the models of compounds capable of maintaining or restoring cell protein homeostasis and of inhibiting β-amyloid (Aβ) oligomerization. Thirty-seven thioxanthen-9-one, xanthen-9-one, naphtho- and anthraquinone derivatives were tested for the direct inhibition of Aβ(1-40) aggregation and for the inhibition of electric eel acetylcholinesterase (eeAChE) and horse serum butyrylcholinesterase (hsBChE). These compounds are characterized by basic side chains, mainly quinolizidinylalkyl moieties, linked to various bi- and tri-cyclic (hetero)aromatic systems. With very few exceptions, these compounds displayed inhibitory activity on both AChE and BChE and on the spontaneous aggregation of β-amyloid. In most cases, IC50 values were in the low micromolar and sub-micromolar range, but some compounds even reached nanomolar potency. The time course of amyloid aggregation in the presence of the most active derivative (IC50 = 0.84 μM) revealed that these compounds might act as destabilizers of mature fibrils rather than mere inhibitors of fibrillization. Many compounds inhibited one or both cholinesterases and Aβ aggregation with similar potency, a fundamental requisite for the possible development of therapeutics exhibiting a multitarget mechanism of action. The described compounds thus represent interesting leads for the development of multitarget AD therapeutics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Guan, X.; Shen, H.; Li, X.; Gan, W.
2017-12-01
Mountainous areas host approximately a quarter of the global land surface, with complex climate and ecosystem conditions. More knowledge about mountainous ecosystems could greatly advance our understanding of the global carbon cycle and climate change. Net Primary Productivity (NPP), the biomass increment of plants, is a widely used ecological indicator that can be obtained by remote sensing methods. However, because no single sensor provides both a long record and sufficient spatial detail, mountainous NPP remains poorly characterized. In this study, a multi-sensor fusion framework was applied to synthesize a 1-km NPP series from 1982 to 2014 in mountainous southwest China, where elevation ranges from 76 m to 6740 m. Validation against field measurements showed that this framework greatly improved the accuracy of NPP (r = 0.79, p < 0.01). The detailed spatial and temporal analysis indicated that NPP trends changed from decreasing to increasing with ascending elevation, as a result of a warmer and drier climate over the region. The correlation of NPP with temperature varied from negative to positive at almost the same elevation break-point as the NPP trends, and the opposite held for precipitation. This pattern was determined by the altitudinally and seasonally uneven allocation of climatic factors, as well as by downward run-off. Moreover, NPP variation showed three distinct stages, with break-points in 1992 and 2002. NPP in the low-elevation areas varied almost three times more strongly than in the high-elevation areas during all three stages, due to the much greater rate of change in precipitation. In summary, this study provides a long-term, accurate NPP record for a poorly understood mountainous ecosystem from multi-source data; the framework and conclusions will benefit further understanding of global climate change.
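The elevation-band trend and climate-correlation analysis described above can be illustrated with a minimal Python/NumPy sketch; the array names (npp, temp, precip, elev), their layout (years x pixels) and the band edges are assumptions made for illustration, not the study's data or code.

import numpy as np

def band_trend_and_correlation(npp, temp, precip, elev, edges):
    """Per elevation band: linear NPP trend (units per year) and Pearson
    correlation of band-mean NPP with temperature and precipitation.
    npp, temp, precip: (years, pixels) arrays; elev: (pixels,) array."""
    years = np.arange(npp.shape[0])
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (elev >= lo) & (elev < hi)
        if not mask.any():
            continue
        series = npp[:, mask].mean(axis=1)           # band-mean NPP per year
        slope = np.polyfit(years, series, 1)[0]      # linear trend of NPP over time
        r_temp = np.corrcoef(series, temp[:, mask].mean(axis=1))[0, 1]
        r_prec = np.corrcoef(series, precip[:, mask].mean(axis=1))[0, 1]
        rows.append((lo, hi, slope, r_temp, r_prec))
    return rows

# Example call with 1000 m bands between 0 and 7000 m elevation:
# results = band_trend_and_correlation(npp, temp, precip, elev, np.arange(0, 7001, 1000))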
Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association
Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You
2017-01-01
This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085
Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association.
Liu, Yu; Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You
2017-11-05
This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems.
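The sequential, cubature-based state estimation that a CMSCJPDA-style tracker builds on can be sketched in Python/NumPy as below. This is a plain (non-square-root) cubature Kalman filter that folds in one measurement per sensor; it omits clutter handling and the JPDA association weighting, and all function and variable names are illustrative rather than taken from the paper.

import numpy as np

def cubature_points(x, P):
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # 2n unit directions
    return x[:, None] + S @ xi                              # shape (n, 2n)

def ckf_predict(x, P, f, Q):
    X = cubature_points(x, P)
    Xp = np.apply_along_axis(f, 0, X)                       # propagate each point
    x_pred = Xp.mean(axis=1)
    d = Xp - x_pred[:, None]
    return x_pred, d @ d.T / X.shape[1] + Q

def ckf_update(x, P, z, h, R):
    X = cubature_points(x, P)
    Z = np.apply_along_axis(h, 0, X)                        # predicted measurements
    z_pred = Z.mean(axis=1)
    dZ, dX = Z - z_pred[:, None], X - x[:, None]
    S = dZ @ dZ.T / X.shape[1] + R                          # innovation covariance
    K = (dX @ dZ.T / X.shape[1]) @ np.linalg.inv(S)         # Kalman gain
    return x + K @ (z - z_pred), P - K @ S @ K.T

def sequential_multisensor_step(x, P, f, Q, sensors):
    """Predict once, then fold in each sensor's measurement in turn.
    sensors: iterable of (z_i, h_i, R_i) tuples, one per sensor."""
    x, P = ckf_predict(x, P, f, Q)
    for z, h, R in sensors:
        x, P = ckf_update(x, P, z, h, R)
    return x, P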
Yu, Miaoyu; Law, Samuel; Dang, Kien; Byrne, Niall
2016-04-01
Psychiatry as a field and undergraduate psychiatry education (UPE) specifically have historically been in the periphery of medicine in China, unlike the relatively central role they occupy in the West. During the current economic reform, Chinese undergraduate medical education (UME) is undergoing significant changes and standardization under the auspices of the national accreditation body. A comparative study, using Bereday's comparative education methodology and Feldmann's evaluative criteria as theoretical frameworks, to gain understanding of the differences and similarities between China and the West in terms of UPE can contribute to the UME reform, and specifically UPE development in China, and promote cross-cultural understanding. The authors employed multi-sourced information to perform a comparative study of UPE, using the University of Toronto as a representative of the western model and Guangxi Medical University, a typical program in China, as the Chinese counterpart. Key contrasts are numerous; highlights include the difference in age and level of education of the entrants to medical school, centrally vs. locally developed UPE curriculum, level of integration with the rest of medical education, visibility within the medical school, adequacy of teaching resources, amount of clinical learning experience, opportunity for supervision and mentoring, and methods of student assessment. Examination of the existing, multi-sourced information reveals some fundamental differences in the current UPE between the representative Chinese and western programs, reflecting historical, political, cultural, and socioeconomic circumstances of the respective settings. The current analyses show some areas worthy of further exploration to inform Chinese UPE reform. The current research is a practical beginning to the development of a deeper collaborative dialogue about psychiatry and its educational underpinnings between China and the West.
Crawling and walking infants encounter objects differently in a multi-target environment.
Dosso, Jill A; Boudreau, J Paul
2014-10-01
From birth, infants move their bodies in order to obtain information and stimulation from their environment. Exploratory movements are important for the development of an infant's understanding of the world and are well established as being key to cognitive advances. Newly acquired motor skills increase the potential actions available to the infant. However, the way that infants employ potential actions in environments with multiple potential targets is undescribed. The current work investigated the target object selections of infants across a range of self-produced locomotor experience (11- to 14-month-old crawlers and walkers). Infants repeatedly accessed objects among pairs of objects differing in both distance and preference status, some requiring locomotion. Overall, their object actions were found to be sensitive to object preference status; however, the role of object distance in shaping object encounters was moderated by movement status. Crawlers' actions appeared opportunistic and were biased towards nearby objects while walkers' actions appeared intentional and were independent of object position. Moreover, walkers' movements favoured preferred objects more strongly for children with higher levels of self-produced locomotion experience. The multi-target experimental situation used in this work parallels conditions faced by foraging organisms, and infants' behaviours were discussed with respect to optimal foraging theory. There is a complex interplay between infants' agency, locomotor experience, and environment in shaping their motor actions. Infants' movements, in turn, determine the information and experiences offered to infants by their micro-environment.
Sound Localization in Multisource Environments
2009-03-01
A total of 7 paid volunteer listeners (3 males and 4 females, 20-25 years of age ) par- ticipated in the experiment. All had normal hearing (i.e...effects of the loudspeaker frequency responses, and were then sent from an experimental control computer to a Mark of the Unicorn (MOTU 24 I/O) digital-to...after the overall multisource stimulus has been presented (the ’post-cue’ condition). 3.2 Methods 3.2.1 Listeners Eight listeners, ranging in age from
A beam optics study of a modular multi-source X-ray tube for novel computed tomography applications
NASA Astrophysics Data System (ADS)
Walker, Brandon J.; Radtke, Jeff; Chen, Guang-Hong; Eliceiri, Kevin W.; Mackie, Thomas R.
2017-10-01
A modular implementation of a scanning multi-source X-ray tube is designed for the increasing number of multi-source imaging applications in computed tomography (CT). An electron beam array coupled with an oscillating magnetic deflector is proposed as a means for producing an X-ray focal spot at any position along a line. The preliminary multi-source model includes three thermionic electron guns that are deflected in tandem by a slowly varying magnetic field and pulsed according to a scanning sequence that is dependent on the intended imaging application. Particle tracking simulations with particle dynamics analysis software demonstrate that three 100 keV electron beams are laterally swept a combined distance of 15 cm over a stationary target with an oscillating magnetic field of 102 G perpendicular to the beam axis. Beam modulation is accomplished using 25 μs pulse widths to a grid electrode with a reverse gate bias of -500 V and an extraction voltage of +1000 V. Projected focal spot diameters are approximately 1 mm for 138 mA electron beams and the stationary target stays within thermal limits for the 14 kW module. This concept could be used as a research platform for investigating high-speed stationary CT scanners, for lowering dose with virtual fan beam formation, for reducing scatter radiation in cone-beam CT, or for other industrial applications.
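A back-of-the-envelope Python check of the magnetic deflection scale involved: the 100 keV beam energy and 102 G field are taken from the abstract, while the field-region length used below is an assumed illustrative value; this is not a reproduction of the particle-tracking simulations.

import math

E0 = 511.0e3        # electron rest energy, eV
C = 299792458.0     # speed of light, m/s

def gyroradius(kinetic_eV, B_tesla):
    pc = math.sqrt(kinetic_eV**2 + 2.0 * kinetic_eV * E0)   # relativistic pc, eV
    return pc / (C * B_tesla)                               # r = p/(qB); with pc in eV this is pc/(cB)

def small_angle_deflection(kinetic_eV, B_tesla, path_m):
    """Lateral displacement ~ L^2 / (2 r) for a short field region (L << r)."""
    return path_m ** 2 / (2.0 * gyroradius(kinetic_eV, B_tesla))

r = gyroradius(100e3, 102e-4)                               # 102 G = 0.0102 T
print(f"gyroradius at 100 keV, 102 G: {100 * r:.1f} cm")    # order of 10 cm
print(f"deflection over an assumed 5 cm field region: "
      f"{100 * small_angle_deflection(100e3, 102e-4, 0.05):.1f} cm")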
The Fusion Model of Intelligent Transportation Systems Based on the Urban Traffic Ontology
NASA Astrophysics Data System (ADS)
Yang, Wang-Dong; Wang, Tao
To address these issues, urban transport information is represented uniformly with an urban traffic ontology, and the rules and algebraic operations of semantic fusion are defined at the ontology level so that urban traffic information can be fused with semantic completeness and consistency. The paper exploits the semantic completeness of the ontology to build an urban traffic ontology model that resolves problems such as ontology merging and equivalence verification in the semantic fusion of integrated traffic information. Adding semantic fusion to urban transport information integration reduces the amount of data to be integrated and improves the efficiency and integrity of traffic information queries. Through the practical application of an intelligent traffic information integration platform in Changde city, the paper shows that ontology-based semantic fusion improves the effectiveness and efficiency of urban traffic information integration, reduces storage requirements, and improves query efficiency and information completeness.
Chang, S; Wong, K W; Zhang, W; Zhang, Y
1999-08-10
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
NASA Astrophysics Data System (ADS)
Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin
1999-08-01
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
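A textbook bipolar Hopfield associative memory, sketched in Python/NumPy, in the spirit of the network the optical system implements; the Hebbian weight rule and the nonnegative biasing below are standard constructions used for illustration, not the specific weight-optimization algorithm of the paper.

import numpy as np

def hebbian_weights(patterns):
    """patterns: (m, n) array of +/-1 vectors -> symmetric, zero-diagonal weights."""
    _, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, n_sweeps=20):
    """Synchronous bipolar updates until a fixed point (or n_sweeps iterations)."""
    s = probe.copy()
    for _ in range(n_sweeps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

def nonnegative_bias(W):
    """Bias weights to be nonnegative for an intensity-only optical channel.
    The constant offset b adds b * sum(s) to every unit, which a separate
    threshold subchannel can subtract to recover the bipolar summation."""
    b = max(0.0, -float(W.min()))
    return W + b, b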
Linifanib--a multi-targeted receptor tyrosine kinase inhibitor and a low molecular weight gelator.
Marlow, Maria; Al-Ameedee, Mohammed; Smith, Thomas; Wheeler, Simon; Stocks, Michael J
2015-04-14
In this study we demonstrate that linifanib, a multi-targeted receptor tyrosine kinase inhibitor, with a key urea containing pharmacophore, self-assembles into a hydrogel in the presence of low amounts of solvent. We demonstrate the role of the urea functional group and that of fluorine substitution on the adjacent aromatic ring in promoting self-assembly. We have also shown that linifanib has superior mechanical strength to two structurally related analogues and hence increased potential for localisation at an injection site for drug delivery applications.
Dynamic Information Collection and Fusion
2015-12-02
AFRL-AFOSR-VA-TR-2016-0069 DYNAMIC INFORMATION COLLECTION AND FUSION Venugopal Veeravalli UNIVERSITY OF ILLINOIS CHAMPAIGN Final Report 12/02/2015...TITLE AND SUBTITLE Dynamic Information Collection and Fusion 5a. CONTRACT NUMBER FA9550-10-1-0458 5b. GRANT NUMBER AF FA9550-10-1-0458 5c. PROGRAM...information collection, fusion , and inference from diverse modalities Our research has been organized under three inter-related thrusts. The first thrust
SDIA: A dynamic situation driven information fusion algorithm for cloud environment
NASA Astrophysics Data System (ADS)
Guo, Shuhang; Wang, Tong; Wang, Jian
2017-09-01
Information fusion is an important issue in the information integration domain. To provide a broadly applicable information fusion technology for complex and diverse situations, a new information fusion algorithm is proposed. Firstly, a fuzzy evaluation model of tag utility is proposed that can be used to compute tag entropy. Secondly, a ubiquitous situation tag tree model is proposed to define the multidimensional structure of an information situation. Thirdly, similarity matching between situation models is classified into three types: tree inclusion, tree embedding, and tree compatibility. Next, to reduce the time complexity of tree-compatibility matching, a fast ordered tree matching algorithm based on node entropy is proposed to support situation-driven information fusion. Because the algorithm evolves from graph-theoretic unordered tree matching, it can improve the recall and precision of information fusion in a given situation. The proposed algorithm is compared with star and random tree matching algorithms, and the differences between the three are analyzed from the viewpoint of isomorphism, demonstrating the novelty and applicability of the algorithm.
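A small illustrative Python sketch of the tag-entropy idea only: compute the Shannon entropy of each situation tag's observed values and use it to order tags before matching. The data layout, tag names and the ascending ordering are assumptions for illustration, not details taken from the paper.

import math
from collections import Counter

def tag_entropy(values):
    """Shannon entropy (bits) of the observed values of one situation tag."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def order_tags_by_entropy(situation):
    """situation: {tag: [observed values]} -> tag names sorted by entropy."""
    return sorted(situation, key=lambda tag: tag_entropy(situation[tag]))

print(order_tags_by_entropy({
    "location": ["cell-12", "cell-12", "cell-12"],   # entropy 0.0
    "device": ["phone", "tablet", "watch"],          # entropy ~1.58
}))                                                  # -> ['location', 'device']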
Information Fusion - Methods and Aggregation Operators
NASA Astrophysics Data System (ADS)
Torra, Vicenç
Information fusion techniques are commonly applied in Data Mining and Knowledge Discovery. In this chapter, we will give an overview of such applications considering their three main uses. This is, we consider fusion methods for data preprocessing, model building and information extraction. Some aggregation operators (i.e. particular fusion methods) and their properties are briefly described as well.
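Two of the aggregation operators commonly covered in this line of work, the weighted mean and the OWA operator, sketched in Python; the weights and values are illustrative.

import numpy as np

def weighted_mean(values, weights):
    w = np.asarray(weights, dtype=float)
    return float(np.dot(values, w) / w.sum())

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the values sorted in
    descending order, modelling attitude (optimism/pessimism) rather than
    the importance of particular sources."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(v, w) / w.sum())

scores = [0.2, 0.9, 0.6]
print(weighted_mean(scores, [0.5, 0.3, 0.2]))   # importance-weighted fusion -> 0.49
print(owa(scores, [0.6, 0.3, 0.1]))             # optimistic OWA -> 0.74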
WE-DE-201-08: Multi-Source Rotating Shield Brachytherapy Apparatus for Prostate Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadkhah, H; Wu, X; Kim, Y
Purpose: To introduce a novel multi-source rotating shield brachytherapy (RSBT) apparatus for the precise simultaneous angular and linear positioning of all partially-shielded 153Gd radiation sources in interstitial needles for treating prostate cancer. The mechanism is designed to lower the detrimental dose to healthy tissues, the urethra in particular, relative to conventional high-dose-rate brachytherapy (HDR-BT) techniques. Methods: Following needle implantation, the delivery system is docked to the patient template. Each needle is coupled to a multi-source afterloader catheter by a connector passing through a shaft. The shafts are rotated by translating a moving template between two stationary templates. Shaft walls as well as moving template holes are threaded such that the resistive friction produced between the two parts exerts enough force on the shafts to bring about the rotation. Rotation of the shaft is then transmitted to the shielded source via several keys. Thus, shaft angular position is fully correlated with the position of the moving template. The catheter angles are simultaneously incremented throughout treatment as needed, and only a single 360° rotation of all catheters is needed for a full treatment. For each rotation angle, source depth in each needle is controlled by a multi-source afterloader, which is proposed as an array of belt-driven linear actuators, each of which drives a source wire. Results: Optimized treatment plans based on Monte Carlo dose calculations demonstrated RSBT with the proposed apparatus reduced urethral D1cc below that of conventional HDR-BT by 35% for urethral dose gradient volume within 3 mm of the urethra surface. Treatment time to deliver 20 Gy with multi-source RSBT apparatus using nineteen 62.4 GBq 153Gd sources is 117 min. Conclusions: The proposed RSBT delivery apparatus in conjunction with multiple nitinol catheter-mounted platinum-shielded 153Gd sources enables a mechanically feasible urethra-sparing treatment technique for prostate cancer in a clinically reasonable timeframe.
SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, J; Gao, H
2015-06-15
Purpose: This work is to investigate a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first to reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then to quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photo-acoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotation of both optical sources and ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
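The optimisation step can be illustrated with a minimal Python/SciPy sketch: recover coefficients x from limited acoustic data y through a linearised forward operator A using L-BFGS with a gradient from the adjoint operator. Plain Tikhonov regularisation stands in for the tensor-framelet sparsity and ADMM machinery of the paper; A, y and lam are assumed placeholders.

import numpy as np
from scipy.optimize import minimize

def reconstruct(A, y, lam=1e-2):
    """Minimise 0.5*||A x - y||^2 + 0.5*lam*||x||^2 with L-BFGS."""
    def objective(x):
        r = A @ x - y
        return 0.5 * r @ r + 0.5 * lam * x @ x
    def gradient(x):
        return A.T @ (A @ x - y) + lam * x      # adjoint of the (linearised) forward model
    x0 = np.zeros(A.shape[1])
    return minimize(objective, x0, jac=gradient, method="L-BFGS-B").x

# Usage with a precomputed linearised operator and measured data:
# x_hat = reconstruct(A, y)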
Revisiting the JDL Model for Information Exploitation
2013-07-01
High-Level Information Fusion Management and Systems Design, Artech House, Norwood, MA, 2012. [10] E. Blasch, D. A. Lambert, P. Valin , M. M. Kokar...Fusion – Fusion2012 Panel Discussion,” Int. Conf. on Info Fusion, 2012. [29] E. P. Blasch, P. Valin , A-L. Jousselme, et al., “Top Ten Trends in High...P. Valin , E. Bosse, M. Nilsson, J. Van Laere, et al., “Implication of Culture: User Roles in Information Fusion for Enhanced Situational
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
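The separation idea, stripped down to a single snapshot and plain least squares in Python/NumPy: solve for all equivalent source strengths from the mixed pressure through a known transfer matrix, then re-radiate only the strengths belonging to the source of interest. This is not the full interpolated time-domain ESM; the matrix and index layout are illustrative assumptions.

import numpy as np

def separate_source_pressure(G_all, p_mixed, blocks, source_index):
    """G_all: (n_mics, n_equiv) transfer matrix from all equivalent sources to the
    microphones; p_mixed: (n_mics,) measured mixed pressure; blocks: list of
    column-index arrays, one per physical source."""
    q, *_ = np.linalg.lstsq(G_all, p_mixed, rcond=None)   # all equivalent source strengths
    cols = blocks[source_index]
    return G_all[:, cols] @ q[cols]                        # pressure radiated by that source alone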
Multi-Targeted Agents in Cancer Cell Chemosensitization: What We Learnt from Curcumin Thus Far.
Bordoloi, Devivasha; Roy, Nand K; Monisha, Javadi; Padmavathi, Ganesan; Kunnumakkara, Ajaikumar B
2016-01-01
Research over the past several years has developed many mono-targeted therapies for the prevention and treatment of cancer, but it still remains one of the fatal diseases in the world killing 8.2 million people annually. It has been well-established that development of chemoresistance in cancer cells against mono-targeted chemotherapeutic agents by modulation of multiple survival pathways is the major cause of failure of cancer chemotherapy. Therefore, inhibition of these pathways by non-toxic multi-targeted agents may have profoundly high potential in preventing drug resistance and sensitizing cancer cells to chemotherapeutic agents. To study the potential of curcumin, a multi-targeted natural compound, obtained from the plant Turmeric (Curcuma longa) in combination with standard chemotherapeutic agents to inhibit drug resistance and sensitize cancer cells to these agents based on available literature and patents. An extensive literature survey was performed in PubMed and Google for the chemosensitizing potential of curcumin in different cancers published so far and the patents published during 2014-2015. Our search resulted in many in vitro, in vivo and clinical reports signifying the chemosensitizing potential of curcumin in diverse cancers. There were 160 in vitro studies, 62 in vivo studies and 5 clinical studies. Moreover, 11 studies reported on hybrid curcumin: the next generation of curcumin based therapeutics. Also, 34 patents on curcumin's biological activity have been retrieved. Altogether, the present study reveals the enormous potential of curcumin, a natural, non-toxic, multi-targeted agent in overcoming drug resistance in cancer cells and sensitizing them to chemotherapeutic drugs.
State-of-the-Art: DTM Generation Using Airborne LIDAR Data
Chen, Ziyue; Gao, Bingbo; Devereux, Bernard
2017-01-01
Digital terrain model (DTM) generation is the fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to present and evaluate a variety of DTM generation methods. Although great progress has been made, DTM generation, especially DTM generation in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. In accordance with the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis and multi-scale comparison. Typical methods for each category are briefly introduced and the merits and limitations of each category are discussed accordingly. Despite different categories of filtering strategies, these DTM generation methods present similar difficulties when implemented in sharply changing terrain, areas with dense non-ground features and complicated landscapes. This paper suggests that the fusion of multiple sources and the integration of different methods can be effective ways for improving the performance of DTM generation. PMID:28098810
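One of the filter families reviewed here, morphology-based filtering, can be sketched in a few lines of Python/SciPy as a simple progressive morphological filter; the window sizes and height thresholds are illustrative assumptions, and real implementations operate on classified point clouds rather than this rasterised shortcut.

import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_ground(dsm, windows=(3, 5, 9), thresholds=(0.3, 0.5, 1.0)):
    """dsm: 2-D gridded surface heights -> boolean mask of likely ground cells."""
    ground = np.ones(dsm.shape, dtype=bool)
    surface = dsm.copy()
    for w, dh in zip(windows, thresholds):
        opened = grey_opening(surface, size=(w, w))   # remove features smaller than the window
        ground &= (surface - opened) <= dh            # large drops indicate non-ground objects
        surface = opened
    return ground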
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct illustrative simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
Implication of Culture: User Roles in Information Fusion for Enhanced Situational Understanding
2009-07-01
situational understanding through assessment of the environment to determine a coherent state of affairs. The information is integrated with knowledge to...Implication of Culture: User Roles in Information Fusion for Enhanced Situational Understanding Erik Blasch Air Force Research Lab 2241...enhanced tacit knowledge understanding by (1) display fusion for data presentation (e.g. cultural segmentation), (2) interactive fusion to allow the
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance in the Internet of things, and detecting the shadow of a moving target is an important step in moving object detection: the accuracy of shadow detection directly affects the object detection results. A survey of existing shadow detection methods shows that using a single feature cannot produce accurate detection results. We therefore present a new shadow detection method that combines colour information, optical invariance and texture features. Through a comprehensive analysis of the detection results from the three kinds of information, shadows are effectively identified. By combining the advantages of the various cues, the method achieves good results in experiments.
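A minimal Python/NumPy sketch of fusing the three cues mentioned above into one shadow mask. It assumes float RGB arrays frame and background of the same size and a boolean foreground mask fg_mask; the thresholds and the majority-vote rule are illustrative, not the paper's exact decision rule.

import numpy as np

def shadow_mask(frame, background, fg_mask,
                ratio_lo=0.4, ratio_hi=0.95, chroma_tol=0.05, tex_tol=0.1):
    eps = 1e-6
    inten_f = frame.mean(axis=2)
    inten_b = background.mean(axis=2)
    ratio = inten_f / (inten_b + eps)
    darker = (ratio > ratio_lo) & (ratio < ratio_hi)          # shadow darkens but does not blacken

    chroma_f = frame / (frame.sum(axis=2, keepdims=True) + eps)
    chroma_b = background / (background.sum(axis=2, keepdims=True) + eps)
    same_colour = np.abs(chroma_f - chroma_b).sum(axis=2) < chroma_tol   # chromaticity preserved

    gy_f, gx_f = np.gradient(inten_f)
    gy_b, gx_b = np.gradient(inten_b)
    same_texture = (np.abs(gx_f - gx_b) + np.abs(gy_f - gy_b)) < tex_tol  # texture preserved

    votes = darker.astype(int) + same_colour.astype(int) + same_texture.astype(int)
    return fg_mask & (votes >= 2)             # a majority of cues agree -> shadow pixel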
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272
Bi-level multi-source learning for heterogeneous block-wise missing data.
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-11-15
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.
Processing multisource feedback during residency under the guidance of a non-medical coach
Eckenhausen, Marina A.W.; ten Cate, Olle
2018-01-01
Objectives: The present study aimed to investigate residents’ preferences in dealing with personal multi-source feedback (MSF) reports with or without the support of a coach. Methods: Residents employed for at least half a year in the study hospital were eligible to participate. All 43 residents opting to discuss their MSF report with a psychologist-coach before discussing results with the program director were included. Semi-structured interviews were conducted following individual coaching sessions. Qualitative and quantitative data were gathered using field notes. Results: Seventy-four percent (n = 32) always preferred sharing the MSF report with a coach, 21% (n = 9) if either the feedback or the relationship with the program director was less favorable, and 5% (n = 2) saw no difference between discussing with a coach or with the program director. In the final stage of training residents more often preferred the coach (82.6%, n = 19) than in the first stages (65%, n = 13). Reasons for discussing the report with a coach included her neutral and objective position, her expertise, and the open and safe context during the discussion. Conclusions: Most residents preferred discussing multisource feedback results with a coach before their meeting with a program director, particularly if the results were negative. They appeared to struggle with the dual role of the program director (coaching and judging) and appreciated the expertise of a dedicated coach to navigate this confrontation. We encourage residency programs to consider offering residents neutral coaching when processing multisource feedback. PMID:29478041
Multisource Feedback in the Ambulatory Setting
Warm, Eric J.; Schauer, Daniel; Revis, Brian; Boex, James R.
2010-01-01
Background: The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. Methods: Residents participating in a year-long group practice experience called the “long-block” received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. Results: Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. Conclusion: Use of multi-source feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effect of such feedback, as well as the relationships between each of the components in the MSF data set. PMID:21975632
Rochais, Christophe; Lecoutey, Cédric; Gaven, Florence; Giannoni, Patrizia; Hamidouche, Katia; Hedou, Damien; Dubost, Emmanuelle; Genest, David; Yahiaoui, Samir; Freret, Thomas; Bouet, Valentine; Dauphin, François; Sopkova de Oliveira Santos, Jana; Ballandonne, Céline; Corvaisier, Sophie; Malzert-Fréon, Aurélie; Legay, Remi; Boulouard, Michel; Claeysen, Sylvie; Dallemagne, Patrick
2015-04-09
In this work, we describe the synthesis and in vitro evaluation of a novel series of multitarget-directed ligands (MTDL) displaying both nanomolar dual-binding site (DBS) acetylcholinesterase inhibitory effects and partial 5-HT4R agonist activity, among which donecopride was selected for further in vivo evaluations in mice. The latter displayed procognitive and antiamnesic effects and enhanced sAPPα release, accounting for a potential symptomatic and disease-modifying therapeutic benefit in the treatment of Alzheimer's disease.
Optimum Multisensor, Multitarget Localization and Tracking.
1983-06-07
The scanned excerpt's equations are garbled beyond recovery; it discusses the simultaneous solution for the parameter vector (Equation (3.5.1-7)) and the coefficient of mutual dependence M12 (Equation (6.4.1-2)), and cites IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-29, No. 3, June 1981, and B. Friedlander, "An ARMA Modeling Approach to Multitarget Tracking."
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates an object class label to each moving region (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. For achieving this characteristic, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using video surveillance benchmarks publicly accessible. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.
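A tiny sketch of the reliability-weighting idea: redundant estimates of the same target attribute (say, a 2D-derived and a 3D-derived width) are combined with weights given by their reliability measures. The normalised weighted average below is an illustrative rule, not the paper's full dynamics model.

def fuse_attribute(estimates):
    """estimates: list of (value, reliability in [0, 1]) pairs for one attribute."""
    total = sum(r for _, r in estimates)
    if total == 0.0:
        return None, 0.0                              # no trustworthy evidence at all
    fused = sum(v * r for v, r in estimates) / total
    return fused, max(r for _, r in estimates)        # fused value and best supporting reliability

print(fuse_attribute([(1.8, 0.9), (2.3, 0.2)]))       # the noisier cue barely moves the result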
Study on the multi-sensors monitoring and information fusion technology of dangerous cargo container
NASA Astrophysics Data System (ADS)
Xu, Shibo; Zhang, Shuhui; Cao, Wensheng
2017-10-01
In this paper, a multi-sensor monitoring system for dangerous cargo containers is presented. To improve monitoring accuracy, multiple sensors are deployed inside the container. A multi-sensor information fusion solution for monitoring dangerous cargo containers is put forward, and information pre-processing, the fusion algorithm for homogeneous sensors, and information fusion based on a BP neural network are illustrated. Applying multi-sensor fusion to container monitoring is a relatively novel use of the technique.
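A hedged sketch of the BP-network fusion step using scikit-learn's MLPRegressor as the back-propagation network; the sensor channels, network size and the synthetic training data are illustrative assumptions, not the system described in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((500, 4))    # pre-processed readings: [temperature, humidity, gas, shock]
y = 0.5 * X[:, 2] + 0.3 * X[:, 0] + 0.2 * X[:, 3] + 0.02 * rng.standard_normal(500)

fusion_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
fusion_net.fit(X, y)                                          # train the fusion network

new_reading = [[0.9, 0.4, 0.8, 0.7]]
print(round(float(fusion_net.predict(new_reading)[0]), 2))    # fused hazard score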
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information obtained from any single image: because different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a major branch of data fusion technology. It is currently widely used in computer vision, remote sensing, robot vision, medical image processing and military applications. This paper presents the main contents and research methods of image fusion, surveys its status at home and abroad, and analyzes development trends.
Multi-intelligence critical rating assessment of fusion techniques (MiCRAFT)
NASA Astrophysics Data System (ADS)
Blasch, Erik
2015-06-01
Assessment of multi-intelligence fusion techniques includes credibility of algorithm performance, quality of results against mission needs, and usability in a work-domain context. Situation awareness (SAW) brings together low-level information fusion (tracking and identification), high-level information fusion (threat and scenario-based assessment), and information fusion level 5 user refinement (physical, cognitive, and information tasks). To measure SAW, we discuss the SAGAT (Situational Awareness Global Assessment Technique) technique for a multi-intelligence fusion (MIF) system assessment that focuses on the advantages of MIF against single intelligence sources. Building on the NASA TLX (Task Load Index), SAGAT probes, SART (Situational Awareness Rating Technique) questionnaires, and CDM (Critical Decision Method) decision points; we highlight these tools for use in a Multi-Intelligence Critical Rating Assessment of Fusion Techniques (MiCRAFT). The focus is to measure user refinement of a situation over the information fusion quality of service (QoS) metrics: timeliness, accuracy, confidence, workload (cost), and attention (throughput). A key component of any user analysis includes correlation, association, and summarization of data; so we also seek measures of product quality and QuEST of information. Building a notion of product quality from multi-intelligence tools is typically subjective which needs to be aligned with objective machine metrics.
Information Fusion for Situational Awareness
2003-01-01
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP021704 TITLE: Information Fusion for Situational Awareness DISTRIBUTION...component part numbers comprise the compilation report: ADP021634 thru ADP021736 UNCLASSIFIED Information Fusion for Situational Awareness Dr. John...Situation Assessment, or level 2 be applied to address Situational Awareness - the processing, the knowledge of objects, their goal of this paper
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and as efficient as the classical global model using chi2 variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, the simple knowledge of the concentration of urinary metabolites classifies the CKD stage of the patients correctly. PMID:27861591
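The modelling step, rule-shaped binary features followed by an L2-penalised logistic regression, can be sketched in Python/scikit-learn as below; the quantile-based rule grid and the synthetic data are illustrative stand-ins for the paper's exhaustive rule-mining procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def interval_rules(X, n_bins=4):
    """For every variable, build indicator features of the form 'x_j in [q_k, q_{k+1})'."""
    feats, names = [], []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0.0, 1.0, n_bins + 1))
        for k in range(n_bins):
            upper = X[:, j] <= edges[k + 1] if k == n_bins - 1 else X[:, j] < edges[k + 1]
            feats.append((X[:, j] >= edges[k]) & upper)
            names.append(f"var{j}_bin{k}")
    return np.column_stack(feats).astype(float), names

rng = np.random.default_rng(0)
X = rng.standard_normal((110, 20))                     # 110 patients, 20 spectral variables
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)          # synthetic CKD-stage grouping

R, names = interval_rules(X)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=2000).fit(R, y)
print(names[int(np.abs(clf.coef_).argmax())])          # most discriminant rule feature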
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis of great public concern requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we carried out correlation analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed and maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2 and PM10) and social media data (microblog data) was proposed, based on multivariate statistical analysis. The study found that, among these factors, average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was further studied with a machine learning model, the Back Propagation Neural Network (BPNN), which was found to perform better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study helps realize real-time monitoring, analysis and early warning of PM2.5, and it also helps broaden the application of big data and multi-source data mining methods.
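Two of the steps above, driver correlation and short-term ARIMA forecasting, sketched in Python with pandas and statsmodels; the column names, the (1, 1, 1) order and the synthetic daily data are illustrative assumptions, not the Beijing dataset.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 365
df = pd.DataFrame({
    "pm25": 80 + 2.0 * rng.standard_normal(n).cumsum(),   # synthetic daily PM2.5 series
    "wind": rng.random(n), "co": rng.random(n),
    "no2": rng.random(n), "pm10": rng.random(n),
    "blog_posts": rng.integers(0, 200, n),                 # daily microblog count
})

print(df.corr()["pm25"].drop("pm25"))                      # correlation of each driver with PM2.5

model = ARIMA(df["pm25"], order=(1, 1, 1)).fit()           # short-term time-series model
print(model.forecast(steps=3))                             # next-3-day PM2.5 forecast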
Shi, Z; Ma, X H; Qin, C; Jia, J; Jiang, Y Y; Tan, C Y; Chen, Y Z
2012-02-01
Selective multi-target serotonin reuptake inhibitors enhance antidepressant efficacy. Their discovery can be facilitated by multiple methods, including in silico ones. In this study, we developed and tested an in silico method, combinatorial support vector machines (COMBI-SVMs), for virtual screening (VS) multi-target serotonin reuptake inhibitors of seven target pairs (serotonin transporter paired with noradrenaline transporter, H(3) receptor, 5-HT(1A) receptor, 5-HT(1B) receptor, 5-HT(2C) receptor, melanocortin 4 receptor and neurokinin 1 receptor respectively) from large compound libraries. COMBI-SVMs trained with 917-1951 individual target inhibitors correctly identified 22-83.3% (majority >31.1%) of the 6-216 dual inhibitors collected from literature as independent testing sets. COMBI-SVMs showed moderate to good target selectivity in misclassifying as dual inhibitors 2.2-29.8% (majority <15.4%) of the individual target inhibitors of the same target pair and 0.58-7.1% of the other 6 targets outside the target pair. COMBI-SVMs showed low dual inhibitor false hit rates (0.006-0.056%, 0.042-0.21%, 0.2-4%) in screening 17 million PubChem compounds, 168,000 MDDR compounds, and 7-8181 MDDR compounds similar to the dual inhibitors. Compared with similarity searching, k-NN and PNN methods, COMBI-SVM produced comparable dual inhibitor yields, similar target selectivity, and lower false hit rate in screening 168,000 MDDR compounds. The annotated classes of many COMBI-SVMs identified MDDR virtual hits correlate with the reported effects of their predicted targets. COMBI-SVM is potentially useful for searching selective multi-target agents without explicit knowledge of these agents. Copyright © 2011 Elsevier Inc. All rights reserved.
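The combinatorial rule at the heart of the approach, one SVM per target and a dual-inhibitor call only when both models agree, can be sketched with scikit-learn as below; the descriptor matrices are assumed placeholders for molecular fingerprints.

import numpy as np
from sklearn.svm import SVC

def train_target_svm(X_inhibitors, X_inactives):
    """Train one per-target SVM from that target's known inhibitors and inactives."""
    X = np.vstack([X_inhibitors, X_inactives])
    y = np.r_[np.ones(len(X_inhibitors)), np.zeros(len(X_inactives))]
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

def screen_dual_inhibitors(candidates, svm_target_1, svm_target_2):
    """Keep only compounds predicted active by *both* target models."""
    hits_1 = svm_target_1.predict(candidates) == 1
    hits_2 = svm_target_2.predict(candidates) == 1
    return np.flatnonzero(hits_1 & hits_2)     # indices of predicted dual inhibitors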
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When multiple complicated models, multisource structured and unstructured data, and complex requirements analysis are involved, the platform design and integration of hydroinformatics systems become a challenge. To address these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. The distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for jointly regulating the water quantity and water quality of a group of lakes in Wuhan, China is established.
The MiPACQ Clinical Question Answering System
Cairns, Brian L.; Nielsen, Rodney D.; Masanz, James J.; Martin, James H.; Palmer, Martha S.; Ward, Wayne H.; Savova, Guergana K.
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system’s architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation. PMID:22195068
The MiPACQ clinical question answering system.
Cairns, Brian L; Nielsen, Rodney D; Masanz, James J; Martin, James H; Palmer, Martha S; Ward, Wayne H; Savova, Guergana K
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system's architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation.
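The two headline retrieval metrics reported above, Precision at One and mean reciprocal rank, computed in a few lines of Python; ranked_answers is an assumed structure holding, per question, the ranked relevance flags of the returned passages.

def precision_at_one(ranked_answers):
    return sum(ranks[0] for ranks in ranked_answers if ranks) / len(ranked_answers)

def mean_reciprocal_rank(ranked_answers):
    total = 0.0
    for ranks in ranked_answers:
        for position, relevant in enumerate(ranks, start=1):
            if relevant:
                total += 1.0 / position
                break
    return total / len(ranked_answers)

runs = [[True, False], [False, False, True], [False, True]]
print(round(precision_at_one(runs), 3))        # 0.333
print(round(mean_reciprocal_rank(runs), 3))    # 0.611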
A novel image encryption scheme based on Kepler’s third law and random Hadamard transform
NASA Astrophysics Data System (ADS)
Luo, Yu-Ling; Zhou, Rong-Long; Liu, Jun-Xiu; Qiu, Sen-Hui; Cao, Yi
2017-12-01
Abstract not available. Project supported by the National Natural Science Foundation of China (Grant Nos. 61661008 and 61603104), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant Nos. 2015GXNSFBA139256 and 2016GXNSFCA380017), the Funding of Overseas 100 Talents Program of Guangxi Provincial Higher Education, China, the Research Project of Guangxi University of China (Grant No. KY2016YB059), the Guangxi Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS15-07), the Doctoral Research Foundation of Guangxi Normal University, the Guangxi Provincial Experiment Center of Information Science, and the Innovation Project of Guangxi Graduate Education (Grant No. YCSZ2017055).
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that realizes information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on its performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699
An innovative thinking-based intelligent information fusion algorithm.
Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that realizes information fusion with reference to research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and the evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on its performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve effective fusion of information.
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
Chen, Yaqi; Chen, Zhui; Wang, Yi
2015-01-01
Screening and identifying active compounds from traditional Chinese medicine (TCM) and other natural products plays an important role in drug discovery. Here, we describe a magnetic beads-based multi-target affinity selection-mass spectrometry approach for screening bioactive compounds from natural products. Key steps and parameters, including activation of magnetic beads, enzyme/protein immobilization, characterization of functional magnetic beads, and screening and identification of active compounds from a complex mixture by LC/MS, are illustrated. The proposed approach is rapid and efficient in the screening and identification of bioactive compounds from complex natural products.
Data association approaches in bearings-only multi-target tracking
NASA Astrophysics Data System (ADS)
Xu, Benlian; Wang, Zhiquan
2008-03-01
To meet the requirements on computation time and data-association correctness in multi-target tracking, two algorithms are suggested in this paper. The proposed Algorithm 1 is developed from a modified version of the dual Simplex method and has the advantage of a direct and explicit form of the optimal solution. Algorithm 2 is based on the idea of Algorithm 1 and a rotational sort method; it not only retains the advantages of Algorithm 1 but also reduces the computational burden, with a complexity only 1/N that of Algorithm 1. Finally, numerical analyses are carried out to evaluate the performance of the two data association algorithms.
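As a rough illustration of cost-based data association of this kind (substituting SciPy's Hungarian-method solver for the dual-Simplex-derived algorithms described above, with hypothetical bearing residual costs and gate value):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_bearings(predicted_deg, measured_deg, gate_deg=5.0):
    """One-to-one track-to-measurement association on a bearing-residual cost matrix."""
    cost = np.abs(np.subtract.outer(np.asarray(predicted_deg, float),
                                    np.asarray(measured_deg, float)))
    rows, cols = linear_sum_assignment(cost)          # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate_deg]

# tracks predicted at 10, 42, 88 degrees; measurements at 41, 9.5, 120 degrees
print(associate_bearings([10.0, 42.0, 88.0], [41.0, 9.5, 120.0]))  # [(0, 1), (1, 0)]
```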
Multi-target drugs: the trend of drug research and development.
Lu, Jin-Jian; Pan, Wei; Hu, Yuan-Jia; Wang, Yi-Tao
2012-01-01
Summarizing the status of drugs in the market and examining the trend of drug research and development are important in drug discovery. In this study, we compared the drug targets and the market sales of the new molecular entities approved by the U.S. Food and Drug Administration from January 2000 to December 2009. Two networks, namely the target-target and drug-drug networks, have been set up using network analysis tools. The multi-target drugs have much more potential, as shown by the network visualization and the market trends. We discussed the possible reasons and proposed rational strategies for drug research and development in the future.
Zhang, Lin; Shan, Yuanyuan; Ji, Xingyue; Zhu, Mengyuan; Li, Chuansheng; Sun, Ying; Si, Ru; Pan, Xiaoyan; Wang, Jinfeng; Ma, Weina; Dai, Bingling; Wang, Binghe; Zhang, Jie
2017-01-01
Receptor tyrosine kinases (RTKs), especially VEGFR-2, TIE-2, and EphB4, play a crucial role in both angiogenesis and tumorigenesis. Moreover, complexity and heterogeneity of angiogenesis make it difficult to treat such pathological traits with single-target agents. Herein, we developed two classes of multi-target RTK inhibitors (RTKIs) based on the highly conserved ATP-binding pocket of VEGFR-2/TIE-2/EphB4, using previously reported BPS-7 as a lead compound. These multi-target RTKIs exhibited considerable potential as novel anti-angiogenic and anticancer agents. Among them, QDAU5 displayed the most promising potency and selectivity. It significantly suppressed viability of EA.hy926 and proliferation of several cancer cells. Further investigations indicated that QDAU5 showed high affinity to VEGFR-2 and reduced the phosphorylation of VEGFR-2. We identified QDAU5 as a potent multiple RTKs inhibitor exhibiting prominent anti-angiogenic and anticancer potency both in vitro and in vivo. Moreover, quinazolin-4(3H)-one has been identified as an excellent hinge binding moiety for multi-target inhibitors of angiogenic VEGFR-2, Tie-2, and EphB4. PMID:29285210
An Approach to Automated Fusion System Design and Adaptation
Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker
2017-01-01
Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum. PMID:28300762
An Approach to Automated Fusion System Design and Adaptation.
Fritze, Alexander; Mönks, Uwe; Holst, Christoph-Alexander; Lohweg, Volker
2017-03-16
Industrial applications are in transition towards modular and flexible architectures that are capable of self-configuration and -optimisation. This is due to the demand of mass customisation and the increasing complexity of industrial systems. The conversion to modular systems is related to challenges in all disciplines. Consequently, diverse tasks such as information processing, extensive networking, or system monitoring using sensor and information fusion systems need to be reconsidered. The focus of this contribution is on distributed sensor and information fusion systems for system monitoring, which must reflect the increasing flexibility of fusion systems. This contribution thus proposes an approach, which relies on a network of self-descriptive intelligent sensor nodes, for the automatic design and update of sensor and information fusion systems. This article encompasses the fusion system configuration and adaptation as well as communication aspects. Manual interaction with the flexibly changing system is reduced to a minimum.
NASA Astrophysics Data System (ADS)
Emmerman, Philip J.
2005-05-01
Teams of robots or mixed teams of warfighters and robots on reconnaissance and other missions can benefit greatly from a local fusion station. A local fusion station is defined here as a small mobile processor with interfaces to enable the ingestion of multiple heterogeneous sensor data and information streams, including blue force tracking data. These data streams are fused and integrated with contextual information (terrain features, weather, maps, dynamic background features, etc.), and displayed or processed to provide real time situational awareness to the robot controller or to the robots themselves. These blue and red force fusion applications remove redundancies, lessen ambiguities, correlate, aggregate, and integrate sensor information with context such as high resolution terrain. Applications such as safety, team behavior, asset control, training, pattern analysis, etc. can be generated or enhanced by these fusion stations. This local fusion station should also enable the interaction between these local units and a global information world.
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the regions in an IR image by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. They also give a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. PMID:28505137
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation.
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-05-15
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. This method should effectively improve both the target indication and scene spectrum features of fusion images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method involves segmenting the regions in an IR image by significance, identifying the target region and the background region, and then fusing the low-frequency components in the DTCWT domain according to the region segmentation result. For the high-frequency components, region weights are assigned by the information richness of region details to conduct fusion based on both weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich information on scene details. They also give a fusion result superior to existing popular fusion methods, based on either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems.
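For readers unfamiliar with transform-domain fusion, the sketch below illustrates the general idea with an ordinary discrete wavelet transform (PyWavelets) rather than the DTCWT. It replaces the paper's significance-based region segmentation with a crude brightness threshold and the adaptive-phase/shrinkage high-frequency rule with a simple max-absolute rule; it is not the authors' method.

```python
import numpy as np
import pywt

def fuse_ir_visible(ir, vis, wavelet="db2"):
    """Toy region-aware fusion of co-registered, same-size grayscale IR and visible images."""
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(ir.astype(float), wavelet)
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis.astype(float), wavelet)
    target = cA_i > (cA_i.mean() + cA_i.std())        # crude "target region" from bright IR areas
    cA = np.where(target, cA_i, 0.5 * (cA_i + cA_v))  # low frequency: IR in target, average elsewhere
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # high frequency: max-abs rule
    return pywt.idwt2((cA, (pick(cH_i, cH_v), pick(cV_i, cV_v), pick(cD_i, cD_v))), wavelet)
```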
Processing and Fusion of Electro-Optic Information
2001-04-01
Defense Technical Information Center compilation part notice ADP010886, part of a compilation comprising component parts ADP010865 through ADP010894. The paper describes an additional electro-optic (EO) sensor model within OOPSDG, together with performance estimates found prior to producing the new model.
School adjustment of children in residential care: a multi-source analysis.
Martín, Eduardo; Muñoz de Bustillo, María del Carmen
2009-11-01
School adjustment is one of the greatest challenges in residential child care programs. This study has two aims: to analyze school adjustment compared to a normative population, and to carry out a multi-source analysis (child, classmates, and teacher) of this adjustment. A total of 50 classrooms containing 60 children from residential care units were studied. The "Método de asignación de atributos perceptivos" (Allocation of perceptive attributes; Díaz-Aguado, 2006), the "Test Autoevaluativo Multifactorial de Adaptación Infantil" (TAMAI [Multifactor Self-assessment Test of Child Adjustment]; Hernández, 1996) and the "Protocolo de valoración para el profesorado" (Evaluation Protocol for Teachers; Fernández del Valle, 1998) were applied. The main results indicate that, compared with their classmates, children in residential care are perceived as more controversial and less integrated at school, although no differences were observed in problems of isolation. The multi-source analysis shows that there is agreement among the different sources when the externalized and visible aspects are evaluated. These results are discussed in connection with the practices that are being developed in residential child care programs.
Binaural segregation in multisource reverberant environments.
Roman, Nicoleta; Srinivasan, Soundararajan; Wang, DeLiang
2006-12-01
In a natural environment, speech signals are degraded by both reverberation and concurrent noise sources. While human listening is robust under these conditions using only two ears, current two-microphone algorithms perform poorly. The psychological process of figure-ground segregation suggests that the target signal is perceived as a foreground while the remaining stimuli are perceived as a background. Accordingly, the goal is to estimate an ideal time-frequency (T-F) binary mask, which selects the target if it is stronger than the interference in a local T-F unit. In this paper, a binaural segregation system is proposed that extracts the reverberant target signal from multisource reverberant mixtures by utilizing only the location information of the target source. The proposed system combines target cancellation through adaptive filtering and a binary decision rule to estimate the ideal T-F binary mask. The main observation in this work is that the target attenuation in a T-F unit resulting from adaptive filtering is correlated with the relative strength of the target to the mixture. A comprehensive evaluation shows that the proposed system results in large SNR gains. In addition, comparisons using SNR as well as automatic speech recognition measures show that this system outperforms standard two-microphone beamforming approaches and a recent binaural processor.
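The ideal T-F binary mask mentioned here is well defined when the target and interference signals are known separately, as in an evaluation setting; the paper's contribution is estimating it blindly from binaural mixtures. The short sketch below computes only the ideal mask itself, using SciPy's STFT and hypothetical parameter values:

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, interference, fs=16000, nperseg=512, snr_db=0.0):
    """Select T-F units where the target is locally stronger than the interference."""
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, I = stft(interference, fs=fs, nperseg=nperseg)
    local_snr = 20.0 * (np.log10(np.abs(T) + 1e-12) - np.log10(np.abs(I) + 1e-12))
    mask = local_snr > snr_db
    _, resynth = istft(mask * (T + I), fs=fs, nperseg=nperseg)   # masked mixture resynthesis
    return mask, resynth
```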
Geospatial Data Fusion and Multigroup Decision Support for Surface Water Quality Management
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Osidele, O.; Green, R. T.; Xie, H.
2010-12-01
Social networking and social media have gained significant popularity and brought fundamental changes to many facets of our everyday life. With the ever-increasing adoption of GPS-enabled gadgets and technology, location-based content is likely to play a central role in social networking sites. While location-based content is not new to the geoscience community, where geographic information systems (GIS) are extensively used, the delivery of useful geospatial data to targeted user groups for decision support is new. Decision makers and modelers ought to make more effective use of the new web-based tools to expand the scope of environmental awareness education, public outreach, and stakeholder interaction. Environmental decision processes are often rife with uncertainty and controversy, requiring integration of multiple sources of information and compromises between diverse interests. Fusing of multisource, multiscale environmental data for multigroup decision support is a challenging task. Toward this goal, a multigroup decision support platform should strive to achieve transparency, impartiality, and timely synthesis of information. The latter criterion often constitutes a major technical bottleneck to traditional GIS-based media, featuring large file or image sizes and requiring special processing before web deployment. Many tools and design patterns have appeared in recent years to ease the situation somewhat. In this project, we explore the use of Web 2.0 technologies for “pushing” location-based content to multigroups involved in surface water quality management and decision making. In particular, our granular bottom-up approach facilitates effective delivery of information to most relevant user groups. Our location-based content includes in-situ and remotely sensed data disseminated by NASA and other national and local agencies. Our project is demonstrated for managing the total maximum daily load (TMDL) program in the Arroyo Colorado coastal river basin in Texas. The overall design focuses on assigning spatial information to decision support elements and on efficiently using Web 2.0 technologies to relay scientific information to the nonscientific community. We conclude that (i) social networking, if appropriately used, has great potential for mitigating difficulty associated with multigroup decision making; (ii) all potential stakeholder groups should be involved in creating a useful decision support system; and (iii) environmental decision support systems should be considered a must-have, instead of an optional component of TMDL decision support projects. Acknowledgment: This project was supported by NASA grant NNX09AR63G.
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling the development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
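As a rough scikit-learn sketch of subspace-partitioned classification with fitness-weighted decision fusion (using cross-validated accuracy as a stand-in fitness measure, not the paper's two-level fitness or its Dempster-Shafer variant; the subspace split itself is assumed given):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_subspace_ensemble(X, y, subspaces):
    """subspaces: list of column-index arrays, one SVM per input subspace (hypothetical split)."""
    models, fitness = [], []
    for cols in subspaces:
        models.append(SVC(probability=True).fit(X[:, cols], y))
        fitness.append(cross_val_score(SVC(), X[:, cols], y, cv=3).mean())  # a-priori efficacy gauge
    weights = np.asarray(fitness) / np.sum(fitness)
    return models, weights

def fuse_predictions(models, weights, subspaces, X):
    """Weighted sum of per-subspace class posteriors, then arg-max decision."""
    fused = sum(w * m.predict_proba(X[:, cols])
                for w, m, cols in zip(weights, models, subspaces))
    return models[0].classes_[fused.argmax(axis=1)]
```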
Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images
NASA Astrophysics Data System (ADS)
Zhu, Chaozhe; Jiang, Tianzi
2003-05-01
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, providing information on the properties of tissues from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by the different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and partial volume effects. In the second part, the possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and then the final fuzzy map of MS lesions is constructed through the fusion of the fuzzy maps obtained from the different spectra. Finally, 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
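To make the fusion scheme concrete, here is a minimal sketch using Gaussian-shaped fuzzy memberships per spectral image and a conjunctive (minimum) fusion operator. The intensity parameters and the 0.5 decision threshold are hypothetical, and the paper's knowledge-guided construction of the fuzzy maps is not reproduced.

```python
import numpy as np

def membership(img, center, width):
    """Gaussian-shaped fuzzy membership of each voxel to the lesion-like intensity class."""
    return np.exp(-0.5 * ((img - center) / width) ** 2)

def segment_lesions(t1, t2, pd, params, threshold=0.5):
    """Fuse one fuzzy lesion map per spectral image with the minimum operator, then threshold."""
    maps = [membership(img, *params[name]) for name, img in (("t1", t1), ("t2", t2), ("pd", pd))]
    fused = np.minimum.reduce(maps)    # conjunctive fusion of the three fuzzy maps
    return fused > threshold           # 3-D binary lesion segmentation
```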
The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion
Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B.; Bono, Christopher M.
2015-01-01
Study Design: An Internet-based evaluation of Web sites regarding lumbar fusion. Objective: The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods: Using five search engines and three different search terms of varying complexity (“low back fusion,” “lumbar fusion,” and “lumbar arthrodesis”), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results: The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by the National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions: Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care. PMID:26933614
The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion.
Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B; Bono, Christopher M
2016-03-01
Study Design: An Internet-based evaluation of Web sites regarding lumbar fusion. Objective: The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods: Using five search engines and three different search terms of varying complexity ("low back fusion," "lumbar fusion," and "lumbar arthrodesis"), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results: The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by the National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions: Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care.
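The Flesch-Kincaid grade level used in this study is computed from words per sentence and syllables per word. The sketch below uses a rough regular-expression syllable heuristic and is only an approximation of the way published readability tools implement the standard formula:

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, drop one for a silent trailing 'e'."""
    word = word.lower()
    groups = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and groups > 1:
        groups -= 1
    return max(groups, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```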
2014-01-01
Background: Clear cell sarcoma (CCS) is a therapeutically unresolved, aggressive, soft tissue sarcoma (STS) that predominantly affects young adults. This sarcoma is defined by the t(12;22)(q13;q12) translocation, which leads to the fusion of the Ewing sarcoma gene (EWS) to the activating transcription factor 1 (ATF1) gene, producing a chimeric EWS-ATF1 fusion gene. We established a novel CCS cell line called Hewga-CCS and developed an orthotopic tumor xenograft model to enable comprehensive bench-side investigation for intensive basic and preclinical research in CCS, which has a paucity of experimental cell lines. Methods: Hewga-CCS was derived from skin metastatic lesions of a CCS that developed in a 34-year-old female. The karyotype and chimeric transcript were analyzed. Xenografts were established and characterized by morphology and immunohistochemical reactivity. Subsequently, the antitumor effects of pazopanib, a recently approved, novel, multitargeted tyrosine kinase inhibitor (TKI) used for the treatment of advanced soft tissue sarcoma, on Hewga-CCS were assessed in vitro and in vivo. Results: Hewga-CCS harbored the type 2 EWS-ATF1 transcript. Xenografts morphologically mimicked the primary tumor and expressed S-100 protein and antigens associated with melanin synthesis (Melan-A, HMB45). Pazopanib suppressed the growth of Hewga-CCS both in vivo and in vitro. A phospho-receptor tyrosine kinase array revealed phosphorylation of c-MET, but not of VEGFR, in Hewga-CCS. Subsequent experiments showed that pazopanib exerted antitumor effects through the inhibition of HGF/c-MET signaling. Conclusions: CCS is a rare, devastating disease, and our established CCS cell line and xenograft model may be a useful tool for further in-depth investigation and understanding of the drug-sensitivity mechanism. PMID:24946937
Outani, Hidetatsu; Tanaka, Takaaki; Wakamatsu, Toru; Imura, Yoshinori; Hamada, Kenichiro; Araki, Nobuhito; Itoh, Kazuyuki; Yoshikawa, Hideki; Naka, Norifumi
2014-06-19
Clear cell sarcoma (CCS) is a therapeutically unresolved, aggressive, soft tissue sarcoma (STS) that predominantly affects young adults. This sarcoma is defined by t(12;22)(q13;q12) translocation, which leads to the fusion of Ewing sarcoma gene (EWS) to activating transcription factor 1 (ATF1) gene, producing a chimeric EWS-ATF1 fusion gene. We established a novel CCS cell line called Hewga-CCS and developed an orthotopic tumor xenograft model to enable comprehensive bench-side investigation for intensive basic and preclinical research in CCS with a paucity of experimental cell lines. Hewga-CCS was derived from skin metastatic lesions of a CCS developed in a 34-year-old female. The karyotype and chimeric transcript were analyzed. Xenografts were established and characterized by morphology and immunohistochemical reactivity. Subsequently, the antitumor effects of pazopanib, a recently approved, novel, multitargeted, tyrosine kinase inhibitor (TKI) used for the treatment of advanced soft tissue sarcoma, on Hewga-CCS were assessed in vitro and in vivo. Hewga-CCS harbored the type 2 EWS-ATF1 transcript. Xenografts morphologically mimicked the primary tumor and expressed S-100 protein and antigens associated with melanin synthesis (Melan-A, HMB45). Pazopanib suppressed the growth of Hewga-CCS both in vivo and in vitro. A phospho-receptor tyrosine kinase array revealed phosphorylation of c-MET, but not of VEGFR, in Hewga-CCS. Subsequent experiments showed that pazopanib exerted antitumor effects through the inhibition of HGF/c-MET signaling. CCS is a rare, devastating disease, and our established CCS cell line and xenograft model may be a useful tool for further in-depth investigation and understanding of the drug-sensitivity mechanism.
Lappala, Anna; Nishima, Wataru; Miner, Jacob; Fenimore, Paul; Fischer, Will; Hraber, Peter; Zhang, Ming; McMahon, Benjamin; Tung, Chang-Shung
2018-05-10
Membrane fusion proteins are responsible for viral entry into host cells—a crucial first step in viral infection. These proteins undergo large conformational changes from pre-fusion to fusion-initiation structures, and, despite differences in viral genomes and disease etiology, many fusion proteins are arranged as trimers. Structural information for both pre-fusion and fusion-initiation states is critical for understanding virus neutralization by the host immune system. In the case of Ebola virus glycoprotein (EBOV GP) and Zika virus envelope protein (ZIKV E), pre-fusion state structures have been identified experimentally, but only partial structures of fusion-initiation states have been described. While the fusion-initiation structure is in an energetically unfavorable state that is difficult to solve experimentally, the existing structural information combined with computational approaches enabled the modeling of fusion-initiation state structures of both proteins. These structural models provide an improved understanding of four different neutralizing antibodies in the prevention of viral host entry.
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
Semiotic foundation for multisensor-multilook fusion
NASA Astrophysics Data System (ADS)
Myler, Harley R.
1998-07-01
This paper explores the application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative rather than quantitative methods. The term semiotic refers to signs, or signatory data that encapsulates information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is examined, and the design of a multisensor system as an information fusion system is explored. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and a discussion of how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not yet been constructed.
Advances in Multi-Sensor Information Fusion: Theory and Applications 2017.
Jin, Xue-Bo; Sun, Shuli; Wei, Hong; Yang, Feng-Bao
2018-04-11
The information fusion technique can integrate a large amount of data and knowledge representing the same real-world object and obtain a consistent, accurate, and useful representation of that object. The data may be independent or redundant, and can be obtained by different sensors at the same time or at different times. A suitable combination of investigative methods can substantially increase the profit of information in comparison with that from a single sensor. Multi-sensor information fusion has been a key issue in sensor research since the 1970s, and it has been applied in many fields. For example, manufacturing and process control industries can generate a lot of data, which have real, actionable business value. The fusion of these data can greatly improve productivity through digitization. The goal of this special issue is to report innovative ideas and solutions for multi-sensor information fusion in the emerging applications era, focusing on development, adoption, and applications.
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
NASA Astrophysics Data System (ADS)
Blasch, Erik; Kadar, Ivan; Hintz, Kenneth; Biermann, Joachim; Chong, Chee-Yee; Salerno, John; Das, Subrata
2007-04-01
Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and platforms need to be informed, based on mission needs, on how to collect, process, and exploit data. To meet these growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006 to discuss the various issues surrounding the interaction of Resource Management with Level 2/3 Situation and Threat Assessment. This paper briefly consolidates the discussion of the invited panelists. The common themes include: (1) addressing the user in system management, sensor control, and knowledge-based information collection; (2) determining a standard set of fusion metrics for optimization and evaluation based on the application; (3) allowing dynamic and adaptive updating to deliver timely information needs and information rates; (4) optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis; (5) providing constraints from distributed resource mission planning and scheduling; and (6) defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection.
Data fusion approach to threat assessment for radar resources management
NASA Astrophysics Data System (ADS)
Komorniczak, Wojciech; Pietrasinski, Jerzy; Solaiman, Basel
2002-03-01
The paper deals with the problem of multifunction radar resource management, which consists of target/task ranking and task scheduling. The paper is focused on target ranking with a data fusion approach. Data from the radar (the object's velocity, range, altitude, direction, etc.), the IFF system (Identification Friend or Foe), and the ESM system (Electronic Support Measures, providing information concerning the threat's electromagnetic activities) are used to decide the importance assignment for each detected target. The main problem consists of the multiplicity of the various types of input information. The information from the radar is of the probabilistic or ambiguous imperfection type, and the IFF information is of the evidential type. To take advantage of these information sources, an advanced data fusion system is necessary. The system should deal with the following situations: fusion of evidential and fuzzy information, and fusion of evidential and a priori information. The paper describes a system that fuses fuzzy and evidential information without first converting them to the same type of information. The use of dynamic fuzzy qualifiers is also proposed. The paper shows the results of preliminary system tests.
Evaluating the potential of improving residential water balance at building scale.
Agudelo-Vera, Claudia M; Keesman, Karel J; Mels, Adriaan R; Rijnaarts, Huub H M
2013-12-15
Earlier results indicated that, for an average household, self-sufficiency in water supply can be achieved by following the Urban Harvest Approach (UHA), in a combination of demand minimization, cascading and multi-sourcing. To achieve these results, it was assumed that all available local resources can be harvested. In reality, however, temporal, spatial and location-bound factors pose limitations to this harvest and, thus, to self-sufficiency. This article investigates potential spatial and temporal limitations to harvesting local water resources at building level for the Netherlands, with a focus on indoor demand. Two building types were studied, a free-standing house (one four-person household) and a mid-rise apartment flat (28 two-person households). To be able to model yearly water balances, daily patterns considering household occupancy and the presence of water-using appliances were defined per building type. Three strategies were defined, comprising demand minimization, light grey water (LGW) recycling, and rainwater harvesting (multi-sourcing). Recycling and multi-sourcing cater for toilet flushing and the laundry machine. Results showed that water-saving devices may reduce the conventional demand by 30%. Recycling of LGW can supply 100% of second quality water (DQ2), which represents 36% of the conventional demand or up to 20% of the minimized demand. Rainwater harvesting may supply approximately 80% of the minimized demand in the case of the apartment flat and 60% in the case of the free-standing house. To harvest these potentials, different system specifications, related to the household type, are required. Two constraints on recycling and multi-sourcing were identified, namely (i) limitations in the grey water production and available rainfall; and (ii) the potential to harvest water as determined by the temporal pattern in water availability, water use, and storage and treatment capacities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among the analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models, or methods. These differences can be quantitatively described mainly from three aspects, i.e., multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to further quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was selected to correct the multiple surface reflectance datasets based on the above obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences among surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among the analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models, or methods. These differences can be quantitatively described mainly from three aspects, i.e., multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to further quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was selected to correct the multiple surface reflectance datasets based on the above obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences among surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation. PMID:25405760
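A minimal sketch of the kind of Gaussian (first-and-second-moment) correction described above, matching a coarse-scale reflectance band to the fine-scale baseline band; this is a simplification of the paper's method, which also analyses the spatial variation of the statistics:

```python
import numpy as np

def correct_to_baseline(coarse_band, baseline_band):
    """Shift and scale a coarse-scale reflectance band so its mean and standard
    deviation match those of the fine-scale baseline band (Gaussian assumption)."""
    c = np.asarray(coarse_band, dtype=float)
    b = np.asarray(baseline_band, dtype=float)
    return (c - c.mean()) / (c.std() + 1e-12) * b.std() + b.mean()
```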
Targeting microbial biofilms: current and prospective therapeutic strategies
Koo, Hyun; Allan, Raymond N; Howlin, Robert P; Hall-Stoodley, Luanne; Stoodley, Paul
2017-01-01
Biofilm formation is a key virulence factor for a wide range of microorganisms that cause chronic infections. The multifactorial nature of biofilm development and drug tolerance imposes great challenges for the use of conventional antimicrobials, and indicates the need for multi-targeted or combinatorial therapies. In this review, we focus on current therapeutic strategies and those that are under development that target vital structural and functional traits of microbial biofilms and drug tolerance mechanisms, including the extracellular matrix and dormant cells. We emphasize strategies that are supported by in vivo or ex vivo studies, highlight emerging biofilm-targeting technologies, and provide a rationale for multi-targeted therapies that are aimed at disrupting the complex biofilm microenvironment. PMID:28944770
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
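As a hedged illustration of a target-existence recursion (not the paper's mixture-reduction formulae), the sketch below performs a single-scan Bayes update of an existence probability under assumed survival, detection, and false-alarm probabilities; all numeric values are hypothetical:

```python
def update_existence(p_exist, detected, p_survive=0.98, p_detect=0.9, p_false_alarm=0.05):
    """Predict existence with a survival model, then update it with a detect/miss observation."""
    p_pred = p_survive * p_exist                                     # prediction step
    like_exist = p_detect if detected else 1.0 - p_detect            # P(observation | target exists)
    like_none = p_false_alarm if detected else 1.0 - p_false_alarm   # P(observation | no target)
    num = like_exist * p_pred
    return num / (num + like_none * (1.0 - p_pred))

# a track that keeps being detected gains confidence; missed detections erode it
p = 0.5
for hit in (True, True, False, True):
    p = update_existence(p, hit)
    print(round(p, 3))
```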
Information Fusion of Conflicting Input Data.
Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael
2016-10-29
Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.
Information Fusion of Conflicting Input Data
Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael
2016-01-01
Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874
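The μBalTLCS algorithm itself is not reproduced here. As a loose illustration of the conflict-reducing idea only, the sketch below simply down-weights sources in proportion to their average disagreement with the other sources before averaging:

```python
import numpy as np

def conflict_reducing_average(scores):
    """scores: per-source fuzzy health scores in [0, 1] for one monitored attribute."""
    s = np.asarray(scores, dtype=float)
    if s.size < 2:
        return float(s.mean())
    # conflict of a source = mean absolute disagreement with all other sources
    conflict = np.array([np.mean(np.abs(s[i] - np.delete(s, i))) for i in range(s.size)])
    weights = 1.0 - conflict
    if weights.sum() <= 0:
        weights = np.ones_like(s)
    weights /= weights.sum()
    return float(weights @ s)

print(conflict_reducing_average([0.82, 0.78, 0.15]))  # the outlier source contributes less
```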
Next-generation registries: fusion of data for care, and research.
Mandl, Kenneth D; Edge, Stephen; Malone, Chad; Marsolo, Keith; Natter, Marc D
2013-01-01
Disease-based registries are a critical tool for electronic data capture of high-quality, gold standard data for clinical research as well as for population management in clinical care. Yet, a legacy of significant operational costs, resource requirements, and poor data liquidity have limited their use. Research registries have engendered more than $3 Billion in HHS investment over the past 17 years. Health delivery systems and Accountable Care Organizations are investing heavily in registries to track care quality and follow-up of patient panels. Despite the investment, regulatory and financial models have often enforced a "single purpose" limitation on each registry, restricting the use of data to a pre-defined set of protocols. The need for cost effective, multi-sourced, and widely shareable registry data sets has never been greater, and requires next-generation platforms to robustly support multi-center studies, comparative effectiveness research, post-marketing surveillance and disease management. This panel explores diverse registry efforts, both academic and commercial, that have been implemented in leading-edge clinical, research, and hybrid use cases. Panelists present their experience in these areas as well as lessons learned, challenges addressed, and near innovations and advances.
NASA Astrophysics Data System (ADS)
Blasch, Erik; Kadar, Ivan; Grewe, Lynne L.; Brooks, Richard; Yu, Wei; Kwasinski, Andres; Thomopoulos, Stelios; Salerno, John; Qi, Hairong
2017-05-01
During the 2016 SPIE DSS conference, nine panelists were invited to highlight the trends and opportunities in cyber-physical systems (CPS) and Internet of Things (IoT) with information fusion. The world will be ubiquitously outfitted with many sensors to support our daily living through the Internet of Things (IoT), manage infrastructure developments with cyber-physical systems (CPS), as well as provide communication through networked information fusion technology over the internet (NIFTI). This paper summarizes the panel discussions on the opportunities for information fusion in the growing trends of CPS and IoT. The summary includes the concepts and areas where information fusion supports CPS/IoT, including situation awareness, transportation, and smart grids.
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major potential of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical, or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for multi-modality multi-agent data/information fusion with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, compared to a fuzzy logic model, strongly supported the validity of the new model and inspired future directions for different levels of fusion and different applications.
Ontology driven integration platform for clinical and translational research
Mirhaji, Parsa; Zhu, Min; Vagnoni, Mattew; Bernstam, Elmer V; Zhang, Jiajie; Smith, Jack W
2009-01-01
Semantic Web technologies offer a promising framework for integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize the Semantic Web technologies not only for integrating, repurposing and classification of multi-source clinical data, but also to construct a distributed environment for information sharing, and collaboration online. Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. Components of the semantic solution and its overall architecture are described. PMID:19208190
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
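For context, the nonparametric eigenvalue-based criterion that the authors compare against (the MDL rule of Wax and Kailath, 1985) can be written compactly. The sketch below implements that baseline, not the parametric log-likelihood-based method proposed in the abstract:

```python
import numpy as np

def mdl_num_signals(eigenvalues, n_snapshots):
    """Classic MDL estimate of the number of sources from sample-covariance eigenvalues."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p, N = lam.size, n_snapshots
    scores = []
    for k in range(p):
        tail = lam[k:]                                            # the p - k smallest eigenvalues
        ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)     # geometric / arithmetic mean
        scores.append(-N * (p - k) * np.log(ratio) + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))

# two strong eigenvalues above a noise floor suggest two signals
print(mdl_num_signals([9.1, 4.3, 1.02, 0.98, 1.01], n_snapshots=200))
```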
Nighttime images fusion based on Laplacian pyramid
NASA Astrophysics Data System (ADS)
Wu, Cong; Zhan, Jinhao; Jin, Jicheng
2018-02-01
This paper describes average-weighted fusion, image-pyramid fusion, and wavelet-transform fusion, and applies these methods to the fusion of multiple-exposure nighttime images. By calculating the information entropy and cross entropy of the fused images, the effect of the different fusion methods can be evaluated. Experiments showed that the Laplacian pyramid fusion algorithm is well suited to nighttime image fusion: it reduces halo while preserving image details.
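As an illustration of the pyramid option discussed above, the following sketch fuses two co-registered grayscale exposures with a Laplacian pyramid, keeping the stronger detail coefficient per band and averaging the coarsest level. It is a generic max-absolute-selection rule, assuming OpenCV is available; it is not necessarily the exact rule used in the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass levels plus the coarsest Gaussian level."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
           for i in range(levels)]
    return lap + [gauss[-1]]

def fuse_laplacian(img_a, img_b, levels=4):
    """Fuse two exposures: strongest detail coefficient per band, averaged base level."""
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) > np.abs(b), a, b) for a, b in zip(la[:-1], lb[:-1])]
    fused.append(0.5 * (la[-1] + lb[-1]))
    out = fused[-1]                               # collapse the pyramid
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```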
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth-limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator, thus facilitating the formation of situational awareness.
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
NASA Astrophysics Data System (ADS)
Gameiro, Isabel; Michalska, Patrycja; Tenti, Giammarco; Cores, Ángel; Buendia, Izaskun; Rojo, Ana I.; Georgakopoulos, Nikolaos D.; Hernández-Guijo, Jesús M.; Teresa Ramos, María; Wells, Geoffrey; López, Manuela G.; Cuadrado, Antonio; Menéndez, J. Carlos; León, Rafael
2017-03-01
The formation of neurofibrillary tangles (NFTs), oxidative stress and neuroinflammation have emerged as key targets for the treatment of Alzheimer’s disease (AD), the most prevalent neurodegenerative disorder. These pathological hallmarks are closely related to the over-activity of the enzyme GSK3β and the downregulation of the defense pathway Nrf2-EpRE observed in AD patients. Herein, we report the synthesis and pharmacological evaluation of a new family of multitarget 2,4-dihydropyrano[2,3-c]pyrazoles as dual GSK3β inhibitors and Nrf2 inducers. These compounds are able to inhibit GSK3β and induce the Nrf2 phase II antioxidant and anti-inflammatory pathway at micromolar concentrations, showing interesting structure-activity relationships. The association of both activities results in a remarkable anti-inflammatory ability with an interesting neuroprotective profile in in vitro models of neuronal death induced by oxidative stress, energy depletion and AD. Furthermore, none of the compounds exhibited in vitro neurotoxicity or hepatotoxicity and hence they had improved safety profiles compared to the known electrophilic Nrf2 inducers. In conclusion, the combination of both activities in this family of multitarget compounds makes them of notable interest for the development of lead compounds for the treatment of AD.
Gameiro, Isabel; Michalska, Patrycja; Tenti, Giammarco; Cores, Ángel; Buendia, Izaskun; Rojo, Ana I.; Georgakopoulos, Nikolaos D.; Hernández-Guijo, Jesús M.; Teresa Ramos, María; Wells, Geoffrey; López, Manuela G.; Cuadrado, Antonio; Menéndez, J. Carlos; León, Rafael
2017-01-01
The formation of neurofibrillary tangles (NFTs), oxidative stress and neuroinflammation have emerged as key targets for the treatment of Alzheimer’s disease (AD), the most prevalent neurodegenerative disorder. These pathological hallmarks are closely related to the over-activity of the enzyme GSK3β and the downregulation of the defense pathway Nrf2-EpRE observed in AD patients. Herein, we report the synthesis and pharmacological evaluation of a new family of multitarget 2,4-dihydropyrano[2,3-c]pyrazoles as dual GSK3β inhibitors and Nrf2 inducers. These compounds are able to inhibit GSK3β and induce the Nrf2 phase II antioxidant and anti-inflammatory pathway at micromolar concentrations, showing interesting structure-activity relationships. The association of both activities results in a remarkable anti-inflammatory ability with an interesting neuroprotective profile in in vitro models of neuronal death induced by oxidative stress, energy depletion and AD. Furthermore, none of the compounds exhibited in vitro neurotoxicity or hepatotoxicity and hence they had improved safety profiles compared to the known electrophilic Nrf2 inducers. In conclusion, the combination of both activities in this family of multitarget compounds makes them of notable interest for the development of lead compounds for the treatment of AD. PMID:28361919
Romero Durán, Francisco J.; Alonso, Nerea; Caamaño, Olga; García-Mera, Xerardo; Yañez, Matilde; Prado-Prado, Francisco J.; González-Díaz, Humberto
2014-01-01
In a multi-target complex network, the links (Lij) represent the interactions between the drug (di) and the target (tj), characterized by different experimental measures (Ki, Km, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (cj). In this work, we handle Shannon entropy measures for developing a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model correctly predicts >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%–90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets on 11 different organisms (including human). We also report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a high number of pharmacological tests not carried out experimentally. PMID:25255029
NASA Astrophysics Data System (ADS)
Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph
2016-05-01
Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another; either companies from different sectors run different types of applications or, although deploying the same functionality, they do differ in the extent of their complexity. In any case, MTA leaves one major concern regarding the companies' data, their privacy and security, which requires special attention to the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.
Rusnati, Marco; Oreste, Pasqua; Zoppetti, Giorgio; Presta, Marco
2005-01-01
Heparin is a sulphated glycosaminoglycan currently used as an anticoagulant and antithrombotic drug. It consists largely of disaccharide units of 2-O-sulphated IdoA and N,6-O-disulphated GlcN. Other disaccharides containing unsulphated IdoA or GlcA and N-sulphated or N-acetylated GlcN are also present as minor components. This heterogeneity is more pronounced in heparan sulphate (HS), where the low-sulphated disaccharides are the most abundant. Heparin/HS bind to a variety of biologically active polypeptides, including enzymes, growth factors and cytokines, and viral proteins. This capacity can be exploited to design multi-target heparin/HS-derived drugs for pharmacological interventions in a variety of pathologic conditions besides coagulation and thrombosis, including neoplasia and viral infection. The capsular K5 polysaccharide from Escherichia coli has the same structure as the heparin precursor N-acetyl heparosan. The possibility of producing K5 polysaccharide derivatives by chemical and enzymatic modifications, thus generating heparin/HS-like compounds, has been demonstrated. These K5 polysaccharide derivatives are endowed with different biological properties, including anticoagulant/antithrombotic, antineoplastic, and anti-AIDS activities. Here, the literature data are discussed and the possible therapeutic implications for this novel class of multi-target "biotechnological heparin/HS" molecules are outlined.
Green fluorescence protein-based content-mixing assay of SNARE-driven membrane fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heo, Paul; Kong, Byoungjae; Jung, Young-Hun
Soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins mediate intracellular membrane fusion by forming a ternary SNARE complex. A minimalist approach utilizing proteoliposomes with reconstituted SNARE proteins yielded a wealth of information pinpointing the molecular mechanism of SNARE-mediated fusion and its regulation by accessory proteins. Two important attributes of a membrane fusion are lipid-mixing and the formation of an aqueous passage between apposing membranes. These two attributes are typically observed by using various fluorescent dyes. Currently available in vitro assay systems for observing fusion pore opening have several weaknesses such as cargo-bleeding, incomplete removal of unencapsulated dyes, and inadequate information regarding the size of the fusion pore, limiting measurements of the final stage of membrane fusion. In the present study, we used a biotinylated green fluorescence protein and streptavidin conjugated with Dylight 594 (DyStrp) as a Förster resonance energy transfer (FRET) donor and acceptor, respectively. This FRET pair, encapsulated in each v-vesicle containing synaptobrevin and t-vesicle containing a binary acceptor complex of syntaxin 1a and synaptosomal-associated protein 25, revealed the opening of a large fusion pore of more than 5 nm, without the unwanted signals from unencapsulated dyes or leakage. This system enabled determination of the stoichiometry of the merging vesicles because the FRET efficiency of the FRET pair depended on the molar ratio between dyes. Here, we report a robust and informative assay for SNARE-mediated fusion pore opening. Highlights: • SNARE proteins drive membrane fusion and open a pore for cargo release. • Biotinylated GFP and DyStrp were used as the reporter pair for fusion pore opening. • A procedure for efficient SNARE reconstitution and reporter encapsulation was established. • The FRET pair reported opening of a large fusion pore bigger than 5 nm. • The assay was robust and provided information on the stoichiometry of vesicle fusion.
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Given the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve BIM 3D visualization and 4D virtual simulation, breakdown structures of the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual engineering information integration terminal under a unified data standard system. Applications include visual construction schemes, virtual construction schedules, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
NASA Astrophysics Data System (ADS)
Vieira, João; da Conceição Cunha, Maria
2017-04-01
A multi-objective decision model has been developed to identify the Pareto-optimal set of management alternatives for the conjunctive use of surface water and groundwater of a multisource urban water supply system. A multi-objective evolutionary algorithm, Borg MOEA, is used to solve the multi-objective decision model. The multiple solutions can be shown to stakeholders, allowing them to choose their own solutions depending on their preferences. The multisource urban water supply system studied here is dependent on surface water and groundwater and located in the Algarve region, the southernmost province of Portugal, with a typical warm Mediterranean climate. The rainfall is low, intermittent and concentrated in a short winter, followed by a long and dry period. A base population of 450 000 inhabitants and visits by more than 13 million tourists per year, mostly in summertime, make water management critical and challenging. Previous studies on single-objective optimization after aggregating multiple objectives together have already concluded that only an integrated and interannual water resources management perspective can be efficient for water resource allocation in this drought-prone region. A simulation model of the multisource urban water supply system, using mathematical functions to represent the water balance in the surface reservoirs, the groundwater flow in the aquifers, and the water transport in the distribution network with explicit representation of water quality, is coupled with Borg MOEA. The multi-objective problem formulation includes five objectives. Two objectives evaluate separately the water quantity and the water quality supplied for urban use over a finite time horizon, one objective calculates the operating costs, and two objectives appraise the state of the two water sources - the storage in the surface reservoir and the piezometric levels in the aquifer - at the end of the time horizon. The decision variables are the volumes of withdrawal from each water source in each time step (i.e., reservoir diversion and groundwater pumping). The results provide valuable information for analysing the impacts of the conjunctive use of surface water and groundwater. For example, considering a drought scenario, the results show how the same level of total water supplied can be achieved by different management alternatives with different impacts on the water quality, costs, and the state of the water sources at the end of the time horizon. The results also allow a clear understanding of the potential benefits of the conjunctive use of surface water and groundwater through the mitigation of the variation in the availability of surface water, improving the water quantity and/or water quality delivered to the users, or the better adaptation of such systems to a changing world.
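To make the notion of a Pareto-optimal set concrete, the sketch below filters a table of candidate management alternatives down to the non-dominated ones. It is only an illustration of Pareto dominance with all objectives expressed as minimization (e.g., costs and deficits as-is, end-of-horizon storage and piezometric levels negated); it is not the Borg MOEA used in the study.

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of non-dominated alternatives.
    objectives: array (n_alternatives, n_objectives), all to be minimized."""
    f = np.asarray(objectives, dtype=float)
    keep = np.ones(f.shape[0], dtype=bool)
    for i in range(f.shape[0]):
        # alternative j dominates i if it is no worse everywhere and better somewhere
        dominated = np.any(np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1))
        keep[i] = not dominated
    return keep
```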
Wu, Peng; Huang, Yiyin; Kang, Longtian; Wu, Maoxiang; Wang, Yaobing
2015-01-01
A series of palladium-based catalysts with metal alloying (Sn, Pb) and/or (N-doped) graphene supports showing regularly enhanced electrocatalytic activity were investigated. The peak current density (118.05 mA cm−2) of PdSn/NG is higher than the summed current density (45.63 + 47.59 mA cm−2) of Pd/NG and PdSn/G, revealing a synergistic electrocatalytic oxidation effect in the PdSn/N-doped graphene nanocomposite. Extended experiments show that this multisource synergetic catalytic effect of metal alloying and N-doped graphene support in one catalyst on small organic molecule (methanol, ethanol and ethylene glycol) oxidation is universal in PdM (M = Sn, Pb)/NG catalysts. Further, the high dispersion of small nanoparticles and the altered electron structure and Pd(0)/Pd(II) ratio of Pd in the catalysts, induced by the strong coupling of metal alloying and N-doped graphene, are responsible for the multisource synergistic catalytic effect in PdM (M = Sn, Pb)/NG catalysts. Finally, the catalytic durability and stability are also greatly improved. PMID:26434949
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timofeev, Andrey V.; Egorov, Dmitry V.
This paper presents new results concerning selection of an optimal information fusion formula for an ensemble of Lipschitz classifiers. The goal of information fusion is to create an integral classifier which provides better generalization ability of the ensemble while achieving a practically acceptable level of effectiveness. The problem of information fusion is very relevant for data processing in multi-channel C-OTDR monitoring systems. In this case we have to effectively classify targeted events which appear in the vicinity of the monitored object. Solution of this problem is based on the usage of an ensemble of Lipschitz classifiers, each of which corresponds to a respective channel. We suggest a new method for information fusion in the case of an ensemble of Lipschitz classifiers. This method is called "Weighing Inversely as Lipschitz Constants" (WILC). Results of practical usage of the WILC method in multichannel C-OTDR monitoring systems are presented.
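Only the name of the WILC rule is given above, so the following sketch merely illustrates the underlying idea of weighting each channel's classifier inversely to its Lipschitz constant; the paper's exact fusion formula may differ.

```python
import numpy as np

def wilc_fuse(channel_scores, lipschitz_constants):
    """Combine per-channel class scores with weights inversely proportional to
    each channel classifier's Lipschitz constant (illustrative WILC-style rule).
    channel_scores: array (n_channels, n_classes); lipschitz_constants: (n_channels,)."""
    L = np.asarray(lipschitz_constants, dtype=float)
    w = (1.0 / L) / np.sum(1.0 / L)             # normalized inverse-Lipschitz weights
    fused = w @ np.asarray(channel_scores)      # weighted sum of class scores
    return int(np.argmax(fused)), fused
```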
2016-01-01
planning exercises and wargaming. Initial Experimentation: Late in the research, the prototype platform and the various fusion methods came together. This ... Chapter Four points to prior research ... Uncertainty-Sensitive Heterogeneous Information Fusion ... multimethod fusing of complex information ... our research is assessing the threat of terrorism posed by individuals or groups under scrutiny. Broadly, the ultimate objectives, which go well ...
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion applied to remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, or one more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
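A minimal wavelet-domain fusion of a co-registered image sequence, in the spirit of the multi-resolution analysis described above, can be sketched with PyWavelets: detail coefficients are taken from the frame where they are strongest, approximation coefficients are averaged. The wavelet, level, and selection rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt

def wavelet_fuse(images, wavelet="db2", level=2):
    """Fuse co-registered grayscale frames: average approximation coefficients,
    keep the maximum-magnitude detail coefficients across frames."""
    decomps = [pywt.wavedec2(img.astype(float), wavelet, level=level) for img in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]        # approximation band
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):                                    # (horizontal, vertical, diagonal)
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.argmax(np.abs(stack), axis=0)            # frame with strongest detail
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```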
Silk, Kami J; Perrault, Evan K; Nazione, Samantha; Pace, Kristin; Hager, Polly; Springer, Steven
2013-12-01
The current study reports findings from evaluation research conducted to identify how online prostate cancer treatment decision-making information can be both improved and more effectively disseminated to those who need it most. A multi-method, multi-target approach was used and guided by McGuire's Communication Matrix Model. Focus groups (n = 31) with prostate cancer patients and their family members, and in-depth interviews with physicians (n = 8), helped inform a web survey (n = 89). Results indicated that physicians remain a key information source for medical advice and the Internet is a primary channel used to help make informed prostate cancer treatment decisions. Participants reported a need for more accessible information related to treatment options and treatment side effects. Additionally, physicians indicated that the best way for agencies to reach them with new information to deliver to patients is by contacting them directly and meeting with them one-on-one. Advice for organizations to improve their current prostate cancer web offerings and further ways to improve information dissemination are discussed.
Molecular targeted therapies for solid tumors: management of side effects.
Grünwald, Viktor; Soltau, Jens; Ivanyi, Philipp; Rentschler, Jochen; Reuter, Christoph; Drevs, Joachim
2009-03-01
This review will provide physicians and oncologists with an overview of side effects related to targeted agents that inhibit vascular endothelial growth factor (VEGF), epidermal growth factor (EGF) and mammalian target of rapamycin (mTOR) signaling in the treatment of solid tumors. Such targeted agents can be divided into monoclonal antibodies, tyrosine kinase inhibitors, multitargeted tyrosine kinase inhibitors and serine/threonine kinase inhibitors. Molecular targeted therapies are generally well tolerated, but inhibitory effects on the biological function of the targets in healthy tissue can result in specific treatment-related side effects, particularly with multitargeted agents. We offer some guidance on how to manage adverse events in cancer patients based on the range of options currently available. Copyright 2009 S. Karger AG, Basel.
Dual/multitargeted xanthone derivatives for Alzheimer's disease: where do we stand?
Cruz, Maria I; Cidade, Honorina; Pinto, Madalena
2017-09-01
To date, the current therapy for Alzheimer's disease (AD), based on acetylcholinesterase inhibitors, is only symptomatic and of limited efficacy. Hence, recent research has focused on the development of different pharmacological approaches. Here we discuss the potential of xanthone derivatives as new anti-Alzheimer agents. The interference of xanthone derivatives with acetylcholinesterase and other molecular targets and cellular mechanisms associated with AD has recently been systematically reported. Therefore, we report xanthones with anticholinesterase, monoamine oxidase and amyloid β aggregation inhibitory activities as well as antioxidant properties, emphasizing xanthone derivatives with dual/multitarget activity as potential agents to treat AD. We also propose the structural features for these activities that may guide the design of new, more effective xanthone derivatives.
Zhang, Jing-Jing; Muenzner, Julienne K; Abu El Maaty, Mohamed A; Karge, Bianka; Schobert, Rainer; Wölfl, Stefan; Ott, Ingo
2016-08-16
A rhodium(i) and a ruthenium(ii) complex with a caffeine derived N-heterocyclic carbene (NHC) ligand were biologically investigated as organometallic conjugates consisting of a metal center and a naturally occurring moiety. While the ruthenium(ii) complex was largely inactive, the rhodium(i) NHC complex displayed selective cytotoxicity and significant anti-metastatic and in vivo anti-vascular activities and acted as both a mammalian and an E. coli thioredoxin reductase inhibitor. In HCT-116 cells it increased the reactive oxygen species level, leading to DNA damage, and it induced cell cycle arrest, decreased the mitochondrial membrane potential, and triggered apoptosis. This rhodium(i) NHC derivative thus represents a multi-target compound with promising anti-cancer potential.
SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K; Kim, D; Kim, T
2015-06-15
Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system such as the scatter problem induced by large detector size and cone-beam artifact. In this study, we present the concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifact. Methods: In contrast to a conventional CT system, projection data at a certain angle in IGCT consist of a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. Acquired PGs were sorted into 10 phases and image reconstructions were performed independently at each phase. An image reconstruction algorithm based on filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT image. In addition, the 4D IGCT images of each phase had no significant artifact induced by motion compared with 3D CT. Conclusion: The 4D IGCT image appears to give relatively accurate dynamic information of patient anatomy, as the results were more robust to motion artifact than 3D CT. It will therefore be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT&Future Planning.
NASA Astrophysics Data System (ADS)
Guarnieri, A.; Masiero, A.; Piragnolo, M.; Pirotti, F.; Vettore, A.
2016-06-01
In this paper we present the results of the development of a Web-based archiving and documentation system aimed at the management of multisource and multitemporal data related to cultural heritage. As a case study we selected the building complex of Villa Revedin Bolasco in Castelfranco Veneto (Treviso, Italy) and its park. The buildings and park were built in the XIX century after several restorations of the original XIV century area. The data management system relies on a geodatabase framework in which different kinds of datasets are stored. More specifically, the geodatabase elements consist of historical information, documents, and descriptions of artistic characteristics of the building and the park, in the form of text and images. In addition, we also used floorplans, sections and views of the outer facades of the building extracted from a TLS-based 3D model of the whole Villa. In order to manage and explore this rich dataset, we developed a geodatabase using PostgreSQL with PostGIS as the spatial plugin. The Web-GIS platform, based on HTML5 and PHP, implements NASA Web World Wind, a 3D virtual globe used to enable the navigation and interactive exploration of the park. Furthermore, through a specific timeline function, the user can explore the historical evolution of the building complex.
Remote Sensing Data Fusion to Detect Illicit Crops and Unauthorized Airstrips
NASA Astrophysics Data System (ADS)
Pena, J. A.; Yumin, T.; Liu, H.; Zhao, B.; Garcia, J. A.; Pinto, J.
2018-04-01
Remote sensing data fusion has been playing an increasingly important role in crop planting area monitoring, especially for crop area information acquisition. Multi-temporal data and multi-spectral time series are two major aspects for improving crop identification accuracy. Remote sensing fusion provides high quality multi-spectral and panchromatic images in terms of spectral and spatial information, respectively. In this paper, we take one step further and demonstrate the application of remote sensing data fusion to detecting illicit crops through LSMM, GOBIA, and MCE analysis of strategic information. This methodology emerges as a complementary and effective strategy to control and eradicate illicit crops.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial aspects of infrared and visible light image fusion are improving fusion performance and reducing computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, with better fusion quality and lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is also proposed to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves a more effective result in much less time and performs well in both subjective evaluation and objective indicators.
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and different supervised approaches have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, due to the complexity and diversity in landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively highest performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% in Overall Accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by a ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
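For readers who want to reproduce the classifier comparison on their own stacked per-pixel features, a compact scikit-learn sketch is given below. The hyperparameters and the train/test split are illustrative assumptions; the paper's experiments were run in IDL with their own settings.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y):
    """X: per-pixel features (multiseasonal bands, vegetation index, LST, topography);
    y: land cover class labels. Returns overall accuracy for RF and SVM."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    models = {
        "RF": RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0),
        "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    }
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```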
NASA Technical Reports Server (NTRS)
Freeman, Anthony; Dubois, Pascale; Leberl, Franz; Norikane, L.; Way, Jobea
1991-01-01
Viewgraphs on Geographic Information System for fusion and analysis of high-resolution remote sensing and ground truth data are presented. Topics covered include: scientific objectives; schedule; and Geographic Information System.
2008-03-01
amount of arriving data, extract actionable information, and integrate it with prior knowledge. Add to that the pressures of today's fusion center climate and it becomes clear that analysts, police ... fusion centers, including specifics about how these problems manifest at the Illinois State Police (ISP) Statewide Terrorism and Intelligence Center ...
Different source image fusion based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Xiao; Piao, Yan
2016-03-01
Video image fusion technology makes video obtained by different image sensors complementary, so as to obtain video that is rich in information and suitable for the human visual system. Infrared cameras have good penetrating power in harsh environments such as smoke, fog and low light, but their ability to capture image detail is poor and does not match the human visual system. Visible light imaging alone can provide detailed, high-resolution images suited to the visual system, but visible images are easily affected by the external environment. Fusion of infrared and visible images involves complex algorithms with high computational load, large memory occupancy and high clock-rate requirements; most implementations are in software (e.g., C++ or C), with fewer based on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively increases the amount of information acquired from the scene.
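The gray-level weighted average used here maps naturally onto fixed-point hardware. The sketch below shows the arithmetic such an FPGA datapath would perform (quantized weights, integer multiply-accumulate, rounding shift); the weight value and bit widths are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def weighted_average_fuse(ir, vis, alpha=0.5, frac_bits=8):
    """Gray-level weighted average of co-registered IR and visible frames using
    fixed-point arithmetic. ir, vis: uint8 grayscale images; alpha: IR weight."""
    w_ir = int(round(alpha * (1 << frac_bits)))           # quantized IR weight
    w_vis = (1 << frac_bits) - w_ir                       # weights sum to exactly 1.0
    acc = ir.astype(np.uint32) * w_ir + vis.astype(np.uint32) * w_vis
    return ((acc + (1 << (frac_bits - 1))) >> frac_bits).astype(np.uint8)  # round, rescale
```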
Ontological Issues in Higher Levels of Information Fusion: User Refinement of the Fusion Process
2003-01-01
fusion question, the thing that separates the Greek ... We explore the higher-level purpose of fusion systems by philosophical questions and modern day ... The Greeks focused on both data fusion and ... the Fusion02 conference ... there are common fusion questions, philosophical questions of an ontology - the ... data, World of Visible Things, Belief (pistis), fusion - user refinement. The rest of the paper is as follows: Section 2 details the Greek ...
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and achieves better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than existing approaches.
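The two difference operators that feed the fusion step can be written down directly; a simplified sketch is given below (computing the mean-ratio operator over a small local window is our assumption). The wavelet fusion of the two images and the fuzzy local-information C-means clustering are not reproduced here; a simple threshold or k-means on the fused difference image would give only a baseline change map.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def difference_images(im1, im2, win=3, eps=1e-6):
    """Mean-ratio and log-ratio difference images for two co-registered SAR
    intensity images (the complementary operators combined by the method)."""
    a = im1.astype(float) + eps
    b = im2.astype(float) + eps
    m1, m2 = uniform_filter(a, win), uniform_filter(b, win)   # local mean images
    mean_ratio = 1.0 - np.minimum(m1, m2) / np.maximum(m1, m2)
    log_ratio = np.abs(np.log(b / a))
    return mean_ratio, log_ratio
```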
Quinazoline derivatives as potential anticancer agents: a patent review (2007 - 2010).
Marzaro, Giovanni; Guiotto, Adriano; Chilin, Adriana
2012-03-01
Due to the increase in knowledge about cancer pathways, there is a growing interest in finding novel potential drugs. Quinazoline is one of the most widespread scaffolds amongst bioactive compounds. A number of patents and papers appear in the literature regarding the discovery and development of novel promising quinazoline compounds for cancer chemotherapy. Although there is a progressive decrease in the number of patents filed, there is an increasing number of biochemical targets for quinazoline compounds. This paper provides a comprehensive review of the quinazolines patented in 2007 - 2010 as potential anticancer agents. Information from articles published in international peer-reviewed journals was also included, to give a more exhaustive overview. From about 1995 to 2006, the anticancer quinazolines panorama has been dominated by the 4-anilinoquinazolines as tyrosine kinase inhibitors. The extensive researches conducted in this period could have caused the progressive reduction in the ability to file novel patents as shown in the 2007 - 2010 period. However, the growing knowledge of cancer-related pathways has recently highlighted some novel potential targets for therapy, with quinazolines receiving increasing attention. This is well demonstrated by the number of different targets of the patents considered in this review. The structural heterogeneity in the patented compounds makes it difficult to derive general pharmacophores and make comparisons among claimed compounds. On the other hand, the identification of multi-target compounds seems a reliable goal. Thus, it is reasonable that quinazoline compounds will be studied and developed for multi-target therapies.
Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D
2013-06-28
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains.
Information fusion based optimal control for large civil aircraft system.
Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen
2015-03-01
Wind disturbance has a great influence on the landing security of large civil aircraft. Through simulation research and engineering experience, it can be found that PID control is not good enough to solve the problem of restraining wind disturbance. This paper focuses on anti-wind attitude control for large civil aircraft in the landing phase. In order to improve riding comfort and flight security, an information fusion based optimal control strategy is presented to restrain the wind in the landing phase while maintaining attitudes and airspeed. Data for the Boeing 707 are used to establish a nonlinear model with the total variables of a large civil aircraft, from which two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner-loop control to keep the longitudinal attitude constant and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing the hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for a discrete-time linear system with disturbance. The simulation results on the nonlinear aircraft model indicate that the information fusion optimal control is better than traditional PID control, LQR control and LQR control with integral action in anti-wind disturbance performance in the landing phase. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
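For comparison purposes, the LQR baseline mentioned above can be sketched in a few lines as a fixed-point iteration of the discrete-time Riccati equation; this is the standard textbook regulator, not the paper's information fusion estimator, and the iteration count is an assumption.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K such that u_k = -K x_k minimizes
    sum(x'Qx + u'Ru) subject to x_{k+1} = A x_k + B u_k."""
    P = np.asarray(Q, dtype=float).copy()
    for _ in range(iters):                       # Riccati fixed-point iteration
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```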
Performance Evaluation Modeling of Network Sensors
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.
2003-01-01
Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes that are connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarms rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, and mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) that are consistent with the discrete event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.
Retrieval of biophysical parameters with AVIRIS and ISM: The Landes Forest, south west France
NASA Technical Reports Server (NTRS)
Zagolski, F.; Gastellu-Etchegorry, J. P.; Mougin, E.; Giordano, G.; Marty, G.; Letoan, T.; Beaudoin, A.
1992-01-01
The first steps of an experiment investigating the capability of airborne spectrometer data for retrieval of biophysical parameters of vegetation, especially water conditions, are presented. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and ISM data were acquired in the frame of the 1991 NASA/JPL and CNES campaigns on the Landes, South West France, a large and flat forest area with mainly maritime pines. In-situ measurements were completed at that time, i.e. reflectance spectra, atmospheric profiles, and sampling for further laboratory analyses of element concentrations (lignin, water, cellulose, nitrogen, ...). All information was integrated in an already existing data base (age, LAI, DBH, understory cover, ...). A methodology was designed for (1) obtaining geometrically and atmospherically corrected reflectance data, (2) registering all available information, and (3) analyzing this multi-source information. Our objective is to conduct comparative studies with reflectance simulation models, and to improve these models, especially in the MIR.
Vitale, Rosa Maria; Rispoli, Vincenzo; Desiderio, Doriana; Sgammato, Roberta; Thellung, Stefano; Canale, Claudio; Vassalli, Massimo; Carbone, Marianna; Ciavatta, Maria Letizia; Mollo, Ernesto; Felicità, Vera; Arcone, Rosaria; Gavagnin Capoggiani, Margherita; Masullo, Mariorosario; Florio, Tullio; Amodeo, Pietro
2018-03-07
Multitargeting or polypharmacological approaches, looking for single chemical entities retaining the ability to bind two or more molecular targets, are a potentially powerful strategy to fight complex, multifactorial pathologies. Unfortunately, the search for multiligand agents is challenging because only a small subset of molecules contained in molecular databases are bioactive and even fewer are active on a preselected set of multiple targets. However, collections of natural compounds feature a significantly higher fraction of bioactive molecules than synthetic ones. In this view, we searched our library of 1175 natural compounds from marine sources for molecules including a 2-aminoimidazole+aromatic group motif, found in known compounds active on single relevant targets for Alzheimer's disease (AD). This identified two molecules, a pseudozoanthoxanthin (1) and a bromo-pyrrole alkaloid (2), which were predicted by a computational approach to possess interesting multitarget profiles on AD target proteins. Biochemical assays experimentally confirmed their biological activities. The two compounds inhibit acetylcholinesterase, butyrylcholinesterase, and β-secretase enzymes in the high- to sub-micromolar range. They are also able to prevent and revert β-amyloid (Aβ) aggregation of both Aβ 1-40 and Aβ 1-42 peptides, with 1 being more active than 2. Preliminary in vivo studies suggest that compound 1 is able to restore cholinergic cortico-hippocampal functional connectivity.
Romero Durán, Francisco J; Alonso, Nerea; Caamaño, Olga; García-Mera, Xerardo; Yañez, Matilde; Prado-Prado, Francisco J; González-Díaz, Humberto
2014-09-24
In a multi-target complex network, the links (L(ij)) represent the interactions between the drug (d(i)) and the target (t(j)), characterized by different experimental measures (K(i), K(m), IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (c(j)). In this work, we handle Shannon entropy measures for developing a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model correctly predicts >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%-90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets on 11 different organisms (including human). We also report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a high number of pharmacological tests not carried out experimentally.
Multi-Targeted Antithrombotic Therapy for Total Artificial Heart Device Patients.
Ramirez, Angeleah; Riley, Jeffrey B; Joyce, Lyle D
2016-03-01
To prevent thrombotic or bleeding events in patients receiving a total artificial heart (TAH), agents have been used to avoid adverse events. The purpose of this article is to outline the adoption and results of a multi-targeted antithrombotic clinical procedure guideline (CPG) for TAH patients. Based on a literature review of TAH anticoagulation and multiple case series, a CPG was designed to prescribe the use of multiple pharmacological agents. Total blood loss, Thromboelastograph(®) (TEG), and platelet light-transmission aggregometry (LTA) measurements were conducted on 13 TAH patients during the first 2 weeks of support in our institution. Target values and actual medians for post-implant days 1, 3, 7, and 14 were calculated for kaolin-heparinase TEG, kaolin TEG, LTA, and estimated blood loss. Protocol guidelines were followed, and anticoagulation management reduced bleeding and prevented thrombus formation as well as thromboembolic events in TAH patients post-implantation. The patients in this study were susceptible to a variety of possible complications such as mechanical device issues, thrombotic events, infection, and bleeding. Among them all it was clear that patients were at most risk for bleeding, particularly on postoperative days 1 through 3. However, bleeding was reduced by postoperative days 3 and 7, indicating that acceptable hemostasis was achieved with the anticoagulation protocol. The multidisciplinary, multi-targeted anticoagulation clinical procedure guideline was successful in maintaining adequate antithrombotic therapy for TAH patients.
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group structure information and the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm-maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule takes advantage of the physical meaning of the group-structured dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
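For context, a generic patch-based dictionary learning and sparse coding step can be sketched with scikit-learn as below. This is an unstructured, OMP-coded dictionary used purely as an illustration; it does not implement the paper's adaptive structured dictionary or its group-sparse l1-norm-maximum fusion rule, and the patch size, atom count, and sparsity level are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_dictionary(image, patch_size=(8, 8), n_atoms=128):
    """Learn a dictionary from image patches and sparse-code them with OMP."""
    patches = extract_patches_2d(image.astype(float), patch_size,
                                 max_patches=5000, random_state=0)
    X = patches.reshape(patches.shape[0], -1)
    X -= X.mean(axis=1, keepdims=True)            # remove each patch's DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5, random_state=0)
    codes = dico.fit(X).transform(X)              # sparse coefficients per patch
    return dico.components_, codes                # (atoms, codes)
```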
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. The SSIM index is used to evaluate the high frequency information of the spectrum diagrams in order to assign the weights in the fusion processing adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fusion image can be obtained by inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fusion images.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of very important value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved, and a new algorithm for medical image fusion is presented in which the high frequency and low frequency coefficients are treated separately. When high frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and also evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the details of the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
INFORM Lab: a testbed for high-level information fusion and resource management
NASA Astrophysics Data System (ADS)
Valin, Pierre; Guitouni, Adel; Bossé, Eloi; Wehn, Hans; Happe, Jens
2011-05-01
DRDC Valcartier and MDA have created an advanced simulation testbed for the purpose of evaluating the effectiveness of Network Enabled Operations in a Coastal Wide Area Surveillance situation, with algorithms provided by several universities. This INFORM Lab testbed allows experimenting with high-level distributed information fusion, dynamic resource management and configuration management, given multiple constraints on the resources and their communications networks. This paper describes the architecture of INFORM Lab, the essential concepts of goals and situation evidence, a selected set of algorithms for distributed information fusion and dynamic resource management, as well as auto-configurable information fusion architectures. The testbed provides general services which include a multilayer plug-and-play architecture, and a general multi-agent framework based on John Boyd's OODA loop. The testbed's performance is demonstrated on 2 types of scenarios/vignettes for 1) cooperative search-and-rescue efforts, and 2) a noncooperative smuggling scenario involving many target ships and various methods of deceit. For each mission, an appropriate subset of Canadian airborne and naval platforms are dispatched to collect situation evidence, which is fused, and then used to modify the platform trajectories for the most efficient collection of further situation evidence. These platforms are fusion nodes which obey a Command and Control node hierarchy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Grote, D. P.; Vay, J. L.
2015-05-29
The Fusion Energy Sciences Advisory Committee’s subcommittee on non-fusion applications (FESAC NFA) is conducting a survey to obtain information from the fusion community about non-fusion work that has resulted from their DOE-funded fusion research. The subcommittee has requested that members of the community describe recent developments connected to the activities of the DOE Office of Fusion Energy Sciences. Two questions in particular were posed by the subcommittee. This document contains the authors’ responses to those questions.
Energy Harvesting Research: The Road from Single Source to Multisource.
Bai, Yang; Jantunen, Heli; Juuti, Jari
2018-06-07
Energy harvesting technology may be considered an ultimate solution to replace batteries and provide a long-term power supply for wireless sensor networks. Looking back into its research history, individual energy harvesters for the conversion of single energy sources into electricity are developed first, followed by hybrid counterparts designed for use with multiple energy sources. Very recently, the concept of a truly multisource energy harvester built from only a single piece of material as the energy conversion component is proposed. This review, from the aspect of materials and device configurations, explains in detail a wide scope to give an overview of energy harvesting research. It covers single-source devices including solar, thermal, kinetic and other types of energy harvesters, hybrid energy harvesting configurations for both single and multiple energy sources and single material, and multisource energy harvesters. It also includes the energy conversion principles of photovoltaic, electromagnetic, piezoelectric, triboelectric, electrostatic, electrostrictive, thermoelectric, pyroelectric, magnetostrictive, and dielectric devices. This is one of the most comprehensive reviews conducted to date, focusing on the entire energy harvesting research scene and providing a guide to seeking deeper and more specific research references and resources from every corner of the scientific community. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface defect inspection method and to make automated, robust delineation of image regions of interest (ROIs) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet and slab surfaces. In this work we take into account the complementary advantages of two common machine vision (MV) systems (line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging)). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of ROIs can be placed adaptively: the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through the RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.
Lyness, Karen S; Judiesch, Michael K
2008-07-01
The present study was the first cross-national examination of whether managers who were perceived to be high in work-life balance were expected to be more or less likely to advance in their careers than were less balanced, more work-focused managers. Using self ratings, peer ratings, and supervisor ratings of 9,627 managers in 33 countries, the authors examined within-source and multisource relationships with multilevel analyses. The authors generally found that managers who were rated higher in work-life balance were rated higher in career advancement potential than were managers who were rated lower in work-life balance. However, national gender egalitarianism, measured with Project GLOBE scores, moderated relationships based on supervisor and self ratings, with stronger positive relationships in low egalitarian cultures. The authors also found 3-way interactions of work-life balance ratings, ratee gender, and gender egalitarianism in multisource analyses in which self balance ratings predicted supervisor and peer ratings of advancement potential. Work-life balance ratings were positively related to advancement potential ratings for women in high egalitarian cultures and men in low gender egalitarian cultures, but relationships were nonsignificant for men in high egalitarian cultures and women in low egalitarian cultures.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
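A hedged sketch of Dempster's rule of combination, which underlies such an evidential-reasoning classifier, is shown below; the two-class frame and the mass values in the example are illustrative assumptions, and the frequency-based computation of support values from training data is not reproduced.

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets of class labels."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb               # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example over a hypothetical two-class frame {water, forest}:
# m1 = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.4}
# m2 = {frozenset({"forest"}): 0.3, frozenset({"water", "forest"}): 0.7}
# fused = dempster_combine(m1, m2)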
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications. PMID:27537883
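As a hedged stand-in for track-to-track fusion when the cross-correlation between local tracks is not explicitly modelled, the sketch below implements covariance intersection with a scalar weight search; it is a standard alternative rule, not the authors' Gaussian-mixture algorithm with first-order redundancy removal.

import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_omega=50):
    """Fuse two track estimates (mean, covariance) without knowing their cross-correlation."""
    best = None
    for omega in np.linspace(0.0, 1.0, n_omega):
        info = omega * np.linalg.inv(P1) + (1.0 - omega) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * np.linalg.solve(P1, x1) +
                     (1.0 - omega) * np.linalg.solve(P2, x2))
            best = (np.trace(P), x, P)            # keep the weight with the smallest trace
    return best[1], best[2]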
Spatial Statistical Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Nguyen, Hai
2010-01-01
Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase.
Lu, Kelin; Zhou, Rui
2016-08-15
A sensor fusion methodology for the Gaussian mixtures model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, the duplicate information is removed by considering the first order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves the tracking accuracy in ballistic target tracking in the re-entry phase applications.
NASA Astrophysics Data System (ADS)
Lefebvre, Eric; Helleur, Christopher; Kashyap, Nathan
2008-03-01
Maritime surveillance of coastal regions requires operational staff to integrate a large amount of information from a variety of military and civilian sources. The diverse nature of the information sources makes complete automation difficult. The volume of vessels tracked and the number of sources make it difficult for the limited operation centre staff to fuse all the information manually within a reasonable timeframe. In this paper, a conceptual decision space is proposed to provide a framework for automating the process of operators integrating the sources needed to maintain Maritime Domain Awareness. The decision space contains all potential pairs of ship tracks that are candidates for fusion. The location of the candidate pairs in this defined space depends on the value of the parameters used to make a decision. In the application presented, three independent parameters are used: the source detection efficiency, the geo-feasibility, and the track quality. One of three decisions is applied to each candidate track pair based on these three parameters: 1. to accept the fusion, in which case the tracks are fused into one track, 2. to reject the fusion, in which case the candidate track pair is removed from the list of potential fusions, and 3. to defer the fusion, in which case no fusion occurs but the candidate track pair remains in the list of potential fusions until sufficient information is provided. This paper demonstrates in an operational setting how the proposed conceptual space is used to optimize the different thresholds for the automatic fusion decisions while minimizing the list of unresolved cases when the decision is left to the operator.
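A hedged sketch of the three-way fusion decision is given below: a candidate track pair is accepted, rejected, or deferred based on thresholds over the three parameters; the threshold values and the simple conjunctive rule are illustrative assumptions to be tuned, as the paper describes, against operator workload.

def fusion_decision(detection_efficiency, geo_feasibility, track_quality,
                    accept=(0.8, 0.8, 0.8), reject=(0.3, 0.3, 0.3)):
    """Return 'accept', 'reject' or 'defer' for a candidate track pair (illustrative thresholds)."""
    params = (detection_efficiency, geo_feasibility, track_quality)
    if all(p >= t for p, t in zip(params, accept)):
        return "accept"        # fuse the two tracks into one
    if any(p < t for p, t in zip(params, reject)):
        return "reject"        # remove the pair from the candidate list
    return "defer"             # keep the pair until more information arrives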
Targeted Therapy Shows Benefit in Rare Type of Thyroid Cancer
Treatment with the multitargeted agent vandetanib (Caprelsa) improved progression-free survival in patients with medullary thyroid cancer (MTC), according to findings from a randomized clinical trial.
Bautista-Aguilera, Óscar M; Hagenow, Stefanie; Palomino-Antolin, Alejandra; Farré-Alins, Víctor; Ismaili, Lhassane; Joffrin, Pierre-Louis; Jimeno, María L; Soukup, Ondřej; Janočková, Jana; Kalinowsky, Lena; Proschak, Ewgenij; Iriepa, Isabel; Moraleda, Ignacio; Schwed, Johannes S; Romero Martínez, Alejandro; López-Muñoz, Francisco; Chioua, Mourad; Egea, Javier; Ramsay, Rona R; Marco-Contelles, José; Stark, Holger
2017-10-02
The therapy of complex neurodegenerative diseases requires the development of multitarget-directed drugs (MTDs). Novel indole derivatives with inhibitory activity towards acetyl/butyrylcholinesterases and monoamine oxidases A/B as well as the histamine H3 receptor (H3R) were obtained by optimization of the neuroprotectant ASS234 by incorporating generally accepted H3R pharmacophore motifs. These small-molecule hits demonstrated balanced activities at the targets, mostly in the nanomolar concentration range. Additional in vitro studies showed antioxidative neuroprotective effects as well as the ability to penetrate the blood-brain barrier. With this promising in vitro profile, contilisant (at 1 mg kg-1 i.p.) also significantly improved lipopolysaccharide-induced cognitive deficits. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Polypharmacology Shakes Hands with Complex Aetiopathology.
Brodie, James S; Di Marzo, Vincenzo; Guy, Geoffrey W
2015-12-01
Chronic diseases are due to deviations of fundamental physiological systems, with different pathologies being characterised by similar malfunctioning biological networks. The ensuing compensatory mechanisms may weaken the body's dynamic ability to respond to further insults and reduce the efficacy of conventional single target treatments. The multitarget, systemic, and prohomeostatic actions emerging for plant cannabinoids exemplify what might be needed for future medicines. Indeed, two combined cannabis extracts were approved as a single medicine (Sativex(®)), while pure cannabidiol, a multitarget cannabinoid, is emerging as a treatment for paediatric drug-resistant epilepsy. Using emerging cannabinoid medicines as an example, we revisit the concept of polypharmacology and describe a new empirical model, the 'therapeutic handshake', to predict efficacy/safety of compound combinations of either natural or synthetic origin. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gao, Fengxiang; Mahoney, Jennifer C; Daly, Elizabeth R; Lamothe, Wendy; Tullo, Daniel; Bean, Christine
2014-01-01
A multitarget real-time PCR assay with three targets, including insertion sequence 481 (IS481), IS1001, and an IS1001-like element, as well as pertussis toxin subunit S1 (ptxS1), for the detection of Bordetella species was evaluated during a pertussis outbreak. The sensitivity and specificity were 77 and 88% (PCR) and 66 and 100% (culture), respectively. All patients with an IS481 CT of <30 also tested positive by ptxS1 assay and were clinical pertussis cases. No patients with IS481 CT values of ≥40 tested positive by culture. Therefore, we recommend that culture be performed only for specimens with IS481 CT values of 30 ≤ CT < 40.
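A hedged sketch of the reflex-testing rule recommended above is shown below; it merely encodes the reported CT cut-offs, and the handling of the ptxS1 result is an illustrative reading of the abstract rather than a validated clinical algorithm.

def pertussis_reflex_rule(is481_ct, ptxs1_positive):
    """Illustrative triage of a specimen by the IS481 cycle threshold (CT)."""
    if is481_ct < 30:
        # in the study, all such specimens were also ptxS1-positive clinical cases
        return "report PCR-positive" if ptxs1_positive else "confirm with ptxS1"
    if 30 <= is481_ct < 40:
        return "reflex to culture"     # culture is recommended only in this CT window
    return "no culture recommended"    # CT >= 40: no culture-positive results were seen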
Mahoney, Jennifer C.; Daly, Elizabeth R.; Lamothe, Wendy; Tullo, Daniel; Bean, Christine
2014-01-01
A multitarget real-time PCR assay with three targets, including insertion sequence 481 (IS481), IS1001, and an IS1001-like element, as well as pertussis toxin subunit S1 (ptxS1), for the detection of Bordetella species was evaluated during a pertussis outbreak. The sensitivity and specificity were 77 and 88% (PCR) and 66 and 100% (culture), respectively. All patients with an IS481 CT of <30 also tested positive by ptxS1 assay and were clinical pertussis cases. No patients with IS481 CT values of ≥40 tested positive by culture. Therefore, we recommend that culture be performed only for specimens with IS481 CT values of 30 ≤ CT <40. PMID:24131698
Hybrid Compounds as Anti-infective Agents.
Sbaraglini, María Laura; Talevi, Alan
2017-01-01
Hybrid drugs are multi-target chimeric chemicals combining two or more drugs or pharmacophores covalently linked in a single molecule. In the field of anti-infective agents, they have been proposed as a possible solution to drug resistance issues, presumably having a broader spectrum of activity and a lower probability of eliciting high-level resistance linked to a single gene product. Although less frequently explored, they could also be useful in the treatment of frequently occurring co-infections. Here, we overview recent advances in the field of hybrid antimicrobials. Furthermore, we discuss some cutting-edge approaches to the development of designed multi-target agents in the era of omics and big data, namely analysis of gene signatures and multitask QSAR models. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Optimal path planning for video-guided smart munitions via multitarget tracking
NASA Astrophysics Data System (ADS)
Borkowski, Jeffrey M.; Vasquez, Juan R.
2006-05-01
A recent advance in the development of smart munitions is the ability to autonomously modify target selection during flight in order to maximize the value of the target being destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of obstacle avoidance and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.
Geerts, Hugo; Kennis, Ludo
2014-01-01
Clinical development in brain diseases has one of the lowest success rates in the pharmaceutical industry, and many promising rationally designed single-target R&D projects fail in expensive Phase III trials. By contrast, successful older CNS drugs do have a rich pharmacology. This article will provide arguments suggesting that highly selective single-target drugs are not sufficiently powerful to restore complex neuronal circuit homeostasis. A rationally designed multitarget project can be derisked by dialing in an additional symptomatic treatment effect on top of a disease modification target. Alternatively, we expand upon a hypothetical workflow example using a humanized computer-based quantitative systems pharmacology platform. The hope is that incorporating rational multipharmacology into drug discovery could lead to more impactful polypharmacy drugs.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
Polarization imaging detection technology can acquire multi-dimensional polarization information in addition to the traditional intensity image, thus improving the probability of target detection and recognition. Research on the fusion of polarization images of targets in turbid media helps to obtain high-quality images. Based on laser polarization imaging at visible wavelengths, linearly polarized intensity images were obtained at the corresponding rotation angles of the polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion techniques were then introduced into the processing: the acquired polarization images were processed with different polarization image fusion methods, several fusion methods with superior performance for turbid media were discussed, and the processing results and data tables were given. Pixel-level, feature-level and decision-level fusion algorithms were used to fuse the DOLP (degree of linear polarization) images at these three levels of information fusion. The results show that, as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, whereas the contrast of the fused image is clearly improved compared with any single image; the reasons for this increase in image contrast are analyzed.
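As a hedged illustration of the polarization quantities discussed above, the sketch below computes the Stokes parameters and the degree of linear polarization (DOLP) from four intensity images taken at polarizer angles of 0, 45, 90 and 135 degrees; the array names and the simple pixel-level fusion at the end are illustrative assumptions, not the authors' exact procedure.

import numpy as np

def dolp_from_polarizer_angles(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer orientations."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90                          # horizontal/vertical preference
    s2 = i45 - i135                        # diagonal preference
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

def pixel_level_fusion(intensity, dolp, w=0.5):
    """Illustrative pixel-level fusion: weighted average of normalized intensity and DOLP."""
    intensity = (intensity - intensity.min()) / (np.ptp(intensity) + 1e-12)
    return w * intensity + (1.0 - w) * dolp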
A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags
NASA Astrophysics Data System (ADS)
Meng, S.; Xie, X.
2015-12-01
In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties in input data, model parameters, model structures and output observations. Data assimilation is a useful methodology for reducing these uncertainties. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition improves forecasting performance, and the time delay of runoff routing is another important factor. Moreover, observations of hydrological variables (both ground and satellite observations) are becoming readily available, so the reliability of short-term flood forecasting can be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates upper-layer soil moisture observations to update the model state and generated runoff using the ensemble Kalman filter (EnKF), and the second step assimilates discharge observations to update the model state and runoff within a fixed time window using the ensemble Kalman smoother (EnKS). The smoothing technique is adopted to account for the runoff routing lag. Assimilating both the soil moisture and the discharge observations in this way is expected to improve flood forecasting. To isolate the effectiveness of this dual-step framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
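A minimal sketch of the ensemble Kalman filter analysis step on which such an assimilation framework relies is given below; the linear observation operator, the perturbed-observation form and the variable names are illustrative assumptions, not the operational configuration described above.

import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """One EnKF analysis step.
    ensemble: (n_ens, n_state) forecast states; obs: (n_obs,) observations;
    obs_err_std: observation error standard deviation; H: (n_obs, n_state) observation operator.
    """
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    Y = X @ H.T                                    # predicted-observation anomalies
    R = np.diag(np.full(len(obs), obs_err_std**2))
    P_yy = Y.T @ Y / (n_ens - 1) + R               # innovation covariance
    P_xy = X.T @ Y / (n_ens - 1)                   # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    obs_pert = obs + obs_err_std * np.random.randn(n_ens, len(obs))
    return ensemble + (obs_pert - ensemble @ H.T) @ K.T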
A Knowledge-Based Approach to Information Fusion for the Support of Military Intelligence
2004-03-01
and most reliable an appropriate picture of the battlespace. The presented approach of knowledge based information fusion is focussing on the...incomplete and imperfect information of military reports and background knowledge can be supported substantially in an automated system. Keywords
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three key factors, namely structure preservation, edge preservation and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion approach has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces redundant details, artifacts and distortions.
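As a hedged sketch of how PCA can supply weights for the low-frequency content while a maximum rule handles the detail coefficients, the snippet below uses PyWavelets with an ordinary single-level DWT; the shift-invariant dual-tree complex wavelet stage of the paper is not reproduced, and the eigenvector-based weights are a generic PCA-fusion assumption.

import numpy as np
import pywt  # PyWavelets

def pca_weights(img_a, img_b):
    """Fusion weights from the principal eigenvector of the 2x2 covariance of the two bands."""
    cov = np.cov(np.vstack([img_a.ravel(), img_b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse_pca_wavelet(img_a, img_b, wavelet="db2"):
    (ca, da), (cb, db_) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)
    w = pca_weights(ca, cb)
    ca_f = w[0] * ca + w[1] * cb                             # PCA-weighted approximation band
    details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)   # max-absolute rule on details
                    for x, y in zip(da, db_))
    return pywt.idwt2((ca_f, details), wavelet)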
One decade of the Data Fusion Information Group (DFIG) model
NASA Astrophysics Data System (ADS)
Blasch, Erik
2015-05-01
The revision of the Joint Directors of the Laboratories (JDL) Information Fusion model in 2004 discussed information processing, incorporated the analyst, and was coined the Data Fusion Information Group (DFIG) model. Since that time, developments in information technology (e.g., cloud computing, applications, and multimedia) have altered the role of the analyst. Data production has outpaced the analyst; however, the analyst still has the role of data refinement and information reporting. In this paper, we highlight three examples being addressed by the DFIG model. One example is the role of the analyst to provide semantic queries (through an ontology) so that the vast amount of available data can be indexed, accessed, retrieved, and processed. The second is reporting, which requires the analyst to collect the data into a condensed and meaningful form through information management. The last example is the interpretation of the resolved information from data that must include contextual information not inherent in the data itself. Through a literature review, the DFIG developments in the last decade demonstrate the usability of the DFIG model to bring together the user (analyst or operator) and the machine (information fusion or manager) in a systems design.
NASA Astrophysics Data System (ADS)
Ortuso, Francesco; Bagetta, Donatella; Maruca, Annalisa; Talarico, Carmine; Bolognesi, Maria L.; Haider, Norbert; Borges, Fernanda; Bryant, Sharon; Langer, Thierry; Senderowitz, Hanoch; Alcaro, Stefano
2018-04-01
For every lead compound developed in medicinal chemistry research, numerous other inactive or less active candidates are synthetized/isolated and tested. The majority of these compounds will not be selected for further development due to a sub-optimal pharmacological profile. However, some poorly active or even inactive compounds could live a second life if tested against other targets. Thus, new therapeutic opportunities could emerge and synergistic activities could be identified and exploited for existing compounds by sharing information between researchers who are working on different targets. The Mu.Ta.Lig (Multi-Target Ligand) Chemotheca database aims to offer such opportunities by facilitating information exchange among researchers worldwide. After a preliminary registration, users can (a) virtually upload structures of their compounds together with any corresponding known activity data, and (b) search for other available compounds uploaded by the user community. Each piece of information about given compounds is owned by the user who initially uploaded it and multiple ownership is possible (this occurs if different users uploaded the same compounds or information pertaining to the same compounds). A web-based graphical user interface has been developed to assist compound uploading, compounds searching and data retrieval. Physico-chemical and ADME properties as well as substructure-based PAINS evaluations are computed on the fly for each uploaded compound. Samples of compounds that match a set of search criteria and additional data on these compounds could be requested directly from their owners with no mediation by the Mu.Ta.Lig Chemotheca team. Guest access provides a simplified search interface to retrieve only basic information such as compound IDs and related 2D or 3D chemical structures. Moreover, some compounds can be hidden from Guest users according to an owner’s decision. In contrast, registered users have full access to all of the Chemotheca data including the permission to upload new compounds and/or update experimental/theoretical data (e.g., activities against new targets tested) related to already stored compounds. In order to facilitate scientific collaborations, all available data are connected to the corresponding owner’s email address (available for registered users only). The Chemotheca web site is accessible at http://chemotheca.unicz.it.
Ortuso, Francesco; Bagetta, Donatella; Maruca, Annalisa; Talarico, Carmine; Bolognesi, Maria L; Haider, Norbert; Borges, Fernanda; Bryant, Sharon; Langer, Thierry; Senderowitz, Hanoch; Alcaro, Stefano
2018-01-01
For every lead compound developed in medicinal chemistry research, numerous other inactive or less active candidates are synthetized/isolated and tested. The majority of these compounds will not be selected for further development due to a sub-optimal pharmacological profile. However, some poorly active or even inactive compounds could live a second life if tested against other targets. Thus, new therapeutic opportunities could emerge and synergistic activities could be identified and exploited for existing compounds by sharing information between researchers who are working on different targets. The Mu.Ta.Lig (Multi-Target Ligand) Chemotheca database aims to offer such opportunities by facilitating information exchange among researchers worldwide. After a preliminary registration, users can (a) virtually upload structures of their compounds together with any corresponding known activity data, and (b) search for other available compounds uploaded by the user community. Each piece of information about given compounds is owned by the user who initially uploaded it and multiple ownership is possible (this occurs if different users uploaded the same compounds or information pertaining to the same compounds). A web-based graphical user interface has been developed to assist compound uploading, compounds searching and data retrieval. Physico-chemical and ADME properties as well as substructure-based PAINS evaluations are computed on the fly for each uploaded compound. Samples of compounds that match a set of search criteria and additional data on these compounds could be requested directly from their owners with no mediation by the Mu.Ta.Lig Chemotheca team. Guest access provides a simplified search interface to retrieve only basic information such as compound IDs and related 2D or 3D chemical structures. Moreover, some compounds can be hidden from Guest users according to an owner's decision. In contrast, registered users have full access to all of the Chemotheca data including the permission to upload new compounds and/or update experimental/theoretical data (e.g., activities against new targets tested) related to already stored compounds. In order to facilitate scientific collaborations, all available data are connected to the corresponding owner's email address (available for registered users only). The Chemotheca web site is accessible at http://chemotheca.unicz.it.
Incorporating Target Priorities in the Sensor Tasking Reward Function
NASA Astrophysics Data System (ADS)
Gehly, S.; Bennett, J.
2016-09-01
Orbital debris tracking poses many challenges, most fundamentally the need to track a large number of objects from a limited number of sensors. The use of information theoretic sensor allocation provides a means to efficiently collect data on the multitarget system. An additional need of the community is the ability to specify target priorities, driven both by user needs and environmental factors such as collision warnings. This research develops a method to incorporate target priorities in the sensor tasking reward function, allowing for several applications in different tasking modes such as catalog maintenance, calibration, and collision monitoring. A set of numerical studies is included to demonstrate the functionality of the method.
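A hedged sketch of a priority-weighted, information-theoretic tasking reward of the kind described above is shown below: each candidate observation's expected information gain is scaled by a user-assigned target priority and the sensor is tasked to the highest-reward target; the Gaussian entropy-based gain and the priority values are illustrative assumptions.

import numpy as np

def gaussian_entropy(P):
    """Differential entropy of a Gaussian with covariance P (up to additive constants)."""
    return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * P))

def tasking_reward(prior_covs, post_covs, priorities):
    """Priority-weighted expected information gain for each candidate target."""
    gains = np.array([gaussian_entropy(Pp) - gaussian_entropy(Pa)
                      for Pp, Pa in zip(prior_covs, post_covs)])
    return np.asarray(priorities) * gains

# Task the sensor to the target with the largest weighted gain, e.g.:
# best = int(np.argmax(tasking_reward(prior_covs, post_covs, priorities)))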
Physics-based and human-derived information fusion for analysts
NASA Astrophysics Data System (ADS)
Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael
2017-05-01
Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.
Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.
Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei
2016-11-02
Motivated by the key importance of multi-sensor information fusion algorithms in the state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), the Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system will be adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the received information from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault tolerant fusion framework provides superior performance over its traditional counterparts.
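A hedged sketch of the kind of chi-square residual check used for fault detection in such filters is given below: the normalized innovation squared is compared against a chi-square threshold; the threshold probability and the interface are illustrative assumptions, not the paper's exact two-state-propagator design.

import numpy as np
from scipy.stats import chi2

def innovation_fault_test(innovation, S, alpha=0.01):
    """Flag a measurement as faulty when its normalized innovation squared exceeds the threshold.
    innovation: z - H x_pred, shape (m,); S: innovation covariance H P H^T + R, shape (m, m).
    """
    nis = float(innovation @ np.linalg.solve(S, innovation))   # normalized innovation squared
    threshold = chi2.ppf(1.0 - alpha, df=len(innovation))
    return nis > threshold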
Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter
Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei
2016-01-01
Motivated by the key importance of multi-sensor information fusion algorithms in the state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), the Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system’s error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system will be adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state-propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternately reactivated based on the received information from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault tolerant fusion framework provides superior performance over its traditional counterparts. PMID:27827832
FARE-CAFE: a database of functional and regulatory elements of cancer-associated fusion events.
Korla, Praveen Kumar; Cheng, Jack; Huang, Chien-Hung; Tsai, Jeffrey J P; Liu, Yu-Hsuan; Kurubanjerdjit, Nilubon; Hsieh, Wen-Tsong; Chen, Huey-Yi; Ng, Ka-Lok
2015-01-01
Chromosomal translocation (CT) is of enormous clinical interest because this disorder is associated with various major solid tumors and leukemia. A tumor-specific fusion gene event may occur when a translocation joins two separate genes. Currently, various CT databases provide information about fusion genes and their genomic elements. However, no database of the roles of fusion genes, in terms of essential functional and regulatory elements in oncogenesis, is available. FARE-CAFE is a unique combination of CTs, fusion proteins, protein domains, domain-domain interactions, protein-protein interactions, transcription factors and microRNAs, with subsequent experimental information, which cannot be found in any other CT database. Genomic DNA information including, for example, manually collected exact locations of the first and second break points, sequences and karyotypes of fusion genes are included. FARE-CAFE will substantially facilitate the cancer biologist's mission of elucidating the pathogenesis of various types of cancer. This database will ultimately help to develop 'novel' therapeutic approaches. Database URL: http://ppi.bioinfo.asia.edu.tw/FARE-CAFE. © The Author(s) 2015. Published by Oxford University Press.
FARE-CAFE: a database of functional and regulatory elements of cancer-associated fusion events
Korla, Praveen Kumar; Cheng, Jack; Huang, Chien-Hung; Tsai, Jeffrey J. P.; Liu, Yu-Hsuan; Kurubanjerdjit, Nilubon; Hsieh, Wen-Tsong; Chen, Huey-Yi; Ng, Ka-Lok
2015-01-01
Chromosomal translocation (CT) is of enormous clinical interest because this disorder is associated with various major solid tumors and leukemia. A tumor-specific fusion gene event may occur when a translocation joins two separate genes. Currently, various CT databases provide information about fusion genes and their genomic elements. However, no database of the roles of fusion genes, in terms of essential functional and regulatory elements in oncogenesis, is available. FARE-CAFE is a unique combination of CTs, fusion proteins, protein domains, domain–domain interactions, protein–protein interactions, transcription factors and microRNAs, with subsequent experimental information, which cannot be found in any other CT database. Genomic DNA information including, for example, manually collected exact locations of the first and second break points, sequences and karyotypes of fusion genes are included. FARE-CAFE will substantially facilitate the cancer biologist’s mission of elucidating the pathogenesis of various types of cancer. This database will ultimately help to develop ‘novel’ therapeutic approaches. Database URL: http://ppi.bioinfo.asia.edu.tw/FARE-CAFE PMID:26384373
Design of a multisensor data fusion system for target detection
NASA Astrophysics Data System (ADS)
Thomopoulos, Stelios C.; Okello, Nickens N.; Kadar, Ivan; Lovas, Louis A.
1993-09-01
The objective of this paper is to discuss the issues that are involved in the design of a multisensor fusion system and provide a systematic analysis and synthesis methodology for the design of the fusion system. The system under consideration consists of multifrequency (similar) radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor (similar and dissimilar, and centralized vs. distributed) data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered. Although the study of different approaches may proceed in parallel, the interplay among them is crucial in selecting a fusion configuration for a given application. The natural sequence for addressing the three different issues is to begin from the data modeling, in order to determine the information content of the data. This information will dictate the appropriate fusion approach. This, in turn, will lead to a global fusion architecture. Both distributed and centralized fusion architectures are used to illustrate the design issues along with Monte-Carlo simulation performance comparison of a single sensor versus a multisensor centrally fused system.
Data fusion for delivering advanced traveler information services
DOT National Transportation Integrated Search
2003-05-01
Many transportation professionals have suggested that improved ATIS data fusion techniques and processing will improve the overall quality, timeliness, and usefulness of traveler information. The purpose of this study was four fold. First, conduct a ...
Straube, Andreas; Aicher, Bernhard; Fiebich, Bernd L; Haag, Gunther
2011-03-31
Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas ("pain matrix") are involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is the therapeutic benefit that could not be achieved by the individual constituents and that the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve a completeness of the desired therapeutic effect.As an example the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics without adding more side effects to the therapy. Multitarget therapeutics like combined analgesics broaden the array of therapeutic options, enable the completeness of the therapeutic effect, and allow doctors (and, in self-medication with OTC medications, the patients themselves) to customize treatment to the patient's specific needs. There is substantial clinical evidence that such a multi-component therapy is more effective than mono-component therapies.
2011-01-01
Background Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas ("pain matrix") are involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. Discussion In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is the therapeutic benefit that could not be achieved by the individual constituents and that the single substances of the combinations act together additively or even multiplicatively and cooperate to achieve a completeness of the desired therapeutic effect. As an example the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics without adding more side effects to the therapy. Summary Multitarget therapeutics like combined analgesics broaden the array of therapeutic options, enable the completeness of the therapeutic effect, and allow doctors (and, in self-medication with OTC medications, the patients themselves) to customize treatment to the patient's specific needs. There is substantial clinical evidence that such a multi-component therapy is more effective than mono-component therapies. PMID:21453539
Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi
2009-01-01
In many practical situations thematic classes cannot be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, soil types, etc., which are discrete attributes. On the other hand, remote sensing image features are continuous attributes. Finding a suitable statistical model and estimating its parameters is a challenging task in multisource (e.g., discrete and continuous attributes) data classification. In this paper we present a semi-supervised learning method by assuming that the samples were generated by a mixture model, where each component could be either a continuous or a discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.
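A hedged sketch of the E-step for a mixture whose components combine a Gaussian density over continuous features with a categorical distribution over a discrete feature is shown below; the factorized form, the variable names and the single discrete attribute are illustrative assumptions rather than the paper's estimator.

import numpy as np
from scipy.stats import multivariate_normal

def e_step(X_cont, x_disc, weights, means, covs, cat_probs):
    """Posterior component responsibilities for mixed continuous/discrete data.
    X_cont: (n, d) continuous features; x_disc: (n,) integer-coded discrete feature;
    cat_probs: (k, n_categories) per-component categorical probabilities.
    """
    n, k = X_cont.shape[0], len(weights)
    resp = np.zeros((n, k))
    for j in range(k):
        cont = multivariate_normal.pdf(X_cont, mean=means[j], cov=covs[j])
        disc = cat_probs[j, x_disc]              # factorized discrete likelihood
        resp[:, j] = weights[j] * cont * disc
    resp /= resp.sum(axis=1, keepdims=True)      # normalize over components
    return resp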
WHO Expert Committee on Specifications for Pharmaceutical Preparations.
2012-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use: Development of monographs for The International Pharmacopoeia; WHO good manufacturing practices: water for pharmaceutical use; Pharmaceutical development of multisource (generic) pharmaceutical products--points to consider; Guidelines on submission of documentation for a multisource (generic) finished pharmaceutical product for the WHO Prequalification of Medicines Programme: quality part; Development of paediatric medicines: points to consider in formulation; Recommendations for quality requirements for artemisinin as a starting material in the production of antimalarial active pharmaceutical ingredients.
Evaluation of the Maximum Allowable Cost Program
Lee, A. James; Hefner, Dennis; Dobson, Allen; Hardy, Ralph
1983-01-01
This article summarizes an evaluation of the Maximum Allowable Cost (MAC)-Estimated Acquisition Cost (EAC) program, the Federal Government's cost-containment program for prescription drugs.1 The MAC-EAC regulations which became effective on August 26, 1976, have four major components: (1) Maximum Allowable Cost reimbursement limits for selected multisource or generically available drugs; (2) Estimated Acquisition Cost reimbursement limits for all drugs; (3) “usual and customary” reimbursement limits for all drugs; and (4) a directive that professional fee studies be performed by each State. The study examines the benefits and costs of the MAC reimbursement limits for 15 dosage forms of five multisource drugs and EAC reimbursement limits for all drugs for five selected States as of 1979. PMID:10309857
Path to bio-nano-information fusion.
Chen, Jia Ming; Ho, Chih-Ming
2006-12-01
This article will discuss the challenges in a new convergent discipline created by the fusion of biotechnology, nanotechnology, and information technology. To illustrate the research challenges, we will begin with an introduction to the nanometer-scale environment in which biology resides, and point out the many important behaviors of matters at that scale. Then we will describe an ideal model system, the cell, for bio-nano-information fusion. Our efforts in advancing this field at the Institute of Cell Mimetic Space Exploration (CMISE) will be introduced here as an example to move toward achieving this goal.
Combinatorial Fusion Analysis for Meta Search Information Retrieval
NASA Astrophysics Data System (ADS)
Hsu, D. Frank; Taksa, Isak
Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
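A hedged sketch of the score- and rank-combination idea is given below: each system's scores are normalized to a comparable scale and the combined ranking is produced either by averaging normalized scores or by averaging ranks; the min-max normalization and the toy example are illustrative assumptions, not the full combinatorial fusion analysis framework.

import numpy as np

def normalize_scores(scores):
    """Map raw scores to [0, 1] so scoring systems are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (np.ptp(s) + 1e-12)

def score_combination(score_lists):
    """Average of normalized scores across systems (higher is better)."""
    return np.mean([normalize_scores(s) for s in score_lists], axis=0)

def rank_combination(score_lists):
    """Average rank across systems (lower is better)."""
    ranks = [np.argsort(np.argsort(-np.asarray(s))) + 1 for s in score_lists]
    return np.mean(ranks, axis=0)

# Example: two hypothetical engines scoring the same five documents.
# combined = score_combination([[0.9, 0.2, 0.5, 0.1, 0.4], [0.7, 0.6, 0.3, 0.2, 0.1]])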
NASA Astrophysics Data System (ADS)
Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.
2018-03-01
Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.
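A hedged sketch of an ANN of the kind described, built on scikit-learn, is shown below; the placeholder predictors, network size and cross-validation setup are illustrative assumptions rather than the study's actual configuration of six SWE products plus spatiotemporal covariates.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical predictor matrix standing in for gridded SWE products plus covariates,
# with manual snow-survey SWE as the regression target.
rng = np.random.default_rng(0)
X = rng.random((500, 5))                                   # placeholder predictors
y = X @ np.array([2.0, 1.5, 1.0, 0.5, 0.2]) + 0.1 * rng.standard_normal(500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.3f}")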
Cooperative angle-only orbit initialization via fusion of admissible areas
NASA Astrophysics Data System (ADS)
Jia, Bin; Pham, Khanh; Blasch, Erik; Chen, Genshe; Shen, Dan; Wang, Zhonghai
2017-05-01
For the short-arc angle only orbit initialization problem, the admissible area is often used. However, the accuracy using a single sensor is often limited. For high value space objects, it is desired to achieve more accurate results. Fortunately, multiple sensors, which are dedicated to space situational awareness, are available. The work in this paper uses multiple sensors' information to cooperatively initialize the orbit based on the fusion of multiple admissible areas. Both the centralized fusion and decentralized fusion are discussed. Simulation results verify the expectation that the orbit initialization accuracy is improved by using information from multiple sensors.
Information Fusion Issues in the UK Environmental Science Community
NASA Astrophysics Data System (ADS)
Giles, J. R.
2010-12-01
The Earth is a complex, interacting system which cannot be neatly divided by discipline boundaries. To gain an holistic understanding of even a component of an Earth System requires researchers to draw information from multiple disciplines and integrate these to develop a broader understanding. But the barriers to achieving this are formidable. Research funders attempting to encourage the integration of information across disciplines need to take into account culture issues, the impact of intrusion of projects on existing information systems, ontologies and semantics, scale issues, heterogeneity and the uncertainties associated with combining information from diverse sources. Culture - There is a cultural dualism in the environmental sciences where information sharing is both rewarded and discouraged. Researchers who share information both gain new opportunities and risk reducing their chances of being first author in a high-impact journal. The culture of the environmental science community has to be managed to ensure that information fusion activities are encouraged. Intrusion - Existing information systems have an inertia of their own because of the intellectual and financial capital invested within them. Information fusion activities must recognise and seek to minimise the potential impact of their projects on existing systems. Low-intrusion information fusion systems such as OGC web services and the OpenMI Standard are to be preferred to wholesale replacement of existing systems. Ontology and Semantics - Linking information across disciplines requires a clear understanding of the concepts deployed in the vocabulary used to describe them. Such work is a critical first step to creating routine information fusion. It is essential that national bodies, such as geological survey organisations, document and publish their ontologies, semantics, etc. Scale - Environmental processes operate at scales ranging from microns to the scale of the Solar System and potentially beyond. The many different scales involved provide serious challenges to information fusion which need to be researched. Heterogeneity - Natural systems are heterogeneous, that is, they consist of multiple components each of which may have considerable internal variation. Modelling Earth Systems requires recognition of this inherent complexity. Uncertainty - Understanding the uncertainties within a single information source can be difficult. Understanding the uncertainties across a system of linked models, each drawn from multiple information resources, represents a considerable challenge that must be addressed. The challenges to overcome appear insurmountable to individual research groups; but the potential rewards, in terms of a fuller scientific understanding of Earth Systems, are significant. A major international effort must be mounted to tackle these barriers and enable routine information fusion.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja
Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
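The two records above describe the same NSCT-based fusion pipeline. As a minimal sketch of only the high-frequency fusion rule, the snippet below applies a simplified directive-contrast selection between corresponding sub-bands, assuming the NSCT coefficients are already available as NumPy arrays; the window size and contrast definition are illustrative assumptions, and the normalized Shannon entropy term combined by the authors is omitted for brevity.

```python
# Simplified high-frequency sub-band fusion rule (not the authors' exact code).
import numpy as np
from scipy.ndimage import uniform_filter

def directive_contrast(high, low, win=7):
    """Simplified directive contrast: local HF activity over local LF mean."""
    return uniform_filter(np.abs(high), size=win) / (uniform_filter(np.abs(low), size=win) + 1e-12)

def fuse_high_freq(h1, h2, l1, l2, win=7):
    """Per pixel, keep the HF coefficient whose directive contrast is larger."""
    c1 = directive_contrast(h1, l1, win)
    c2 = directive_contrast(h2, l2, win)
    return np.where(c1 >= c2, h1, h2)

# toy usage on random stand-ins for NSCT sub-band coefficients
rng = np.random.default_rng(0)
h1, h2 = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
l1, l2 = rng.uniform(0.5, 1.0, (64, 64)), rng.uniform(0.5, 1.0, (64, 64))
print(fuse_high_freq(h1, h2, l1, l2).shape)
```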
Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
2018-01-01
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local binary pattern) and SWLD (Simplified Weber local descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
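The matching step above compares encoded face histograms with a symmetric Kullback-Leibler distance. A minimal sketch of that step, assuming the LBP/SWLD encodings have already been converted to histograms, is given below; the symmetric form shown is the standard one and may differ in detail from the authors' variant.

```python
# Symmetric KL distance between two histograms and a nearest-match search.
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    p = p / p.sum()
    q = q / q.sum()
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return 0.5 * (kl_pq + kl_qp)

def match(probe_hist, gallery_hists):
    """Return the gallery index with the smallest symmetric KL distance."""
    dists = [symmetric_kl(probe_hist, g) for g in gallery_hists]
    return int(np.argmin(dists)), dists

# toy usage with random 256-bin histograms
rng = np.random.default_rng(1)
gallery = [rng.random(256) for _ in range(5)]
probe = gallery[2] + 0.05 * rng.random(256)
best, _ = match(probe, gallery)
print("best match:", best)
```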
Homeland security application of the Army Soft Target Exploitation and Fusion (STEF) system
NASA Astrophysics Data System (ADS)
Antony, Richard T.; Karakowski, Joseph A.
2010-04-01
A fusion system that accommodates both text-based extracted information and more conventional sensor-derived input has been developed and demonstrated in a terrorist attack scenario as part of the Empire Challenge (EC) 09 Exercise. Although the fusion system was developed to support Army military analysts, the system, based on a set of foundational fusion principles, has direct applicability to Department of Homeland Security (DHS) and defense, law enforcement, and other applications. Several novel fusion technologies and applications were demonstrated in EC09. One such technology is location normalization, which accommodates both fuzzy semantic expressions such as "behind Library A" or "across the street from the market place" as well as traditional spatial representations. Additionally, the fusion system provides a range of fusion products not supported by traditional fusion algorithms. Many of these additional capabilities have direct applicability to DHS. A formal test of the fusion system was performed during the EC09 exercise. The system demonstrated that it was able to (1) automatically form tracks, (2) help analysts visualize behavior of individuals over time, (3) link key individuals based on both explicit message-based information as well as discovered (fusion-derived) implicit relationships, and (4) suggest possible individuals of interest based on their association with High Value Individuals (HVI) and user-defined key locations.
Personality Disorder Symptoms Are Differentially Related to Divorce Frequency
Disney, Krystle L.; Weinstein, Yana; Oltmanns, Thomas F.
2013-01-01
Divorce is associated with a multitude of outcomes related to health and well-being. Data from a representative community sample (N = 1,241) of St. Louis residents (ages 55–64) were used to examine associations between personality pathology and divorce in late midlife. Symptoms of the 10 DSM–IV personality disorders were assessed with the Structured Interview for DSM–IV Personality and the Multisource Assessment of Personality Pathology (both self and informant versions). Multiple regression analyses showed Paranoid and Histrionic personality disorder symptoms to be consistently and positively associated with number of divorces across all three sources of personality assessment. Conversely, Avoidant personality disorder symptoms were negatively associated with number of divorces. The present paper provides new information about the relationship between divorce and personality pathology at a developmental stage that is understudied in both domains. PMID:23244459
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.
2017-01-01
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796
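The two tracking records above evaluate performance with the OSPA metric. A minimal sketch of the standard OSPA computation (cut-off c, order p) is shown below; it uses the Hungarian algorithm for the optimal assignment and is independent of the multi-Bernoulli/ILH filter itself.

```python
# Standard OSPA metric between an estimated and a ground-truth set of targets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """X, Y: arrays of shape (m, d) and (n, d) of estimated/true positions."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                      # by convention, ensure m <= n
        X, Y, m, n = Y, X, n, m
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d = np.minimum(d, c)           # apply cut-off distance
    row, col = linear_sum_assignment(d ** p)
    cost = (d[row, col] ** p).sum() + (c ** p) * (n - m)
    return (cost / n) ** (1.0 / p)

# toy usage: two estimated tracks vs three ground-truth targets
est = np.array([[0.0, 0.0], [5.0, 5.0]])
gt = np.array([[0.2, 0.1], [5.1, 4.8], [20.0, 20.0]])
print(round(ospa(est, gt), 3))
```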
Mid-course multi-target tracking using continuous representation
NASA Technical Reports Server (NTRS)
Zak, Michail; Toomarian, Nikzad
1991-01-01
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters between the flying objects in a high-density cluster. Therefore, the problem is reduced to an identification of a moving continuum based upon consecutive time-frame observations. In contradistinction to previous approaches, here each target is considered as the center of a small continuous neighborhood subjected to a local-affine transformation, and therefore the target trajectories do not mix; their apparent mixture exists only in the plane of sensor view. The approach is illustrated by an example.
Exploring Multitarget Interactions to Reduce Opiate Withdrawal Syndrome and Psychiatric Comorbidity
2013-01-01
Opioid addiction is often characterized as a chronic relapsing condition due to the severe somatic and behavioral signs, associated with depressive disorders, triggered by opiate withdrawal. Since prolonged abstinence remains a major challenge, our interest has been addressed to such objective. Exploring multitarget interactions, the present investigation suggests that 3 or its (S)-enantiomer and 4, endowed with effective α2C-AR agonism/α2A-AR antagonism/5-HT1A-R agonism, or 7 and 9–11 producing efficacious α2C-AR agonism/α2A-AR antagonism/I2–IBS interaction might represent novel multifunctional tools potentially useful for reducing withdrawal syndrome and associated depression. Such agents, lacking in sedative side effects due to their α2A-AR antagonism, might afford an improvement over current therapies with clonidine-like drugs. PMID:24900763
CADD Modeling of Multi-Target Drugs Against Alzheimer's Disease.
Ambure, Pravin; Roy, Kunal
2017-01-01
Alzheimer's disease (AD) is a neurodegenerative disorder that is described by multiple factors linked with the progression of the disease. The currently approved drugs in the market are not capable of curing AD; instead, they merely provide symptomatic relief. Development of multi-target directed ligands (MTDLs) is an emerging strategy for improving the quality of the treatment against complex diseases like AD. Polypharmacology is a branch of pharmaceutical sciences that deals with the MTDL development. In this mini-review, we have summarized and discussed different strategies that are reported in the literature to design MTDLs for AD. Further, we have discussed the role of different in silico techniques and online resources in computer-aided drug discovery (CADD), for designing or identifying MTDLs against AD. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Kretova, Olga V; Chechetkin, Vladimir R; Fedoseeva, Daria M; Kravatsky, Yuri V; Sosin, Dmitri V; Alembekov, Ildar R; Gorbacheva, Maria A; Gashnikova, Natalya M; Tchurikov, Nickolai A
2017-02-01
Any method for silencing the activity of the HIV-1 retrovirus should tackle the extremely high variability of HIV-1 sequences and mutational escape. We studied sequence variability in the vicinity of selected RNA interference (RNAi) targets from isolates of HIV-1 subtype A in Russia, and we propose that using artificial RNAi is a potential alternative to traditional antiretroviral therapy. We prove that using multiple RNAi targets overcomes the variability in HIV-1 isolates. The optimal number of targets critically depends on the conservation of the target sequences. The total number of targets that are conserved with a probability of 0.7-0.8 should exceed at least 2. Combining deep sequencing and multitarget RNAi may provide an efficient approach to cure HIV/AIDS.
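The conclusion above, that more than two targets conserved with probability 0.7-0.8 are needed, can be illustrated with a back-of-the-envelope binomial calculation. The sketch below assumes, purely for illustration, that each target is conserved independently in a given isolate; the independence assumption and the numbers are ours, not the authors'.

```python
# Chance that at least k of n RNAi targets remain conserved, assuming
# independent conservation of each target with probability p.
from math import comb

def prob_at_least_k_conserved(n_targets, p_conserved, k):
    return sum(comb(n_targets, i) * p_conserved**i * (1 - p_conserved)**(n_targets - i)
               for i in range(k, n_targets + 1))

for n in (2, 3, 4):
    print(n, "targets:", round(prob_at_least_k_conserved(n, 0.75, k=1), 4),
          "chance that at least one stays conserved")
```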
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important for accurate analysis of genetically modified organism (GMO) contents in food/feeds, and the development of novel reference plasmids is a new trend in research on GMO reference materials. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contained seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and the sequence of the soybean endogenous reference gene Lectin. We evaluated the specificity, limit of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R(2)) were more than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations were from 1.08% to 9.84% in simulated sample analysis. All the results demonstrated that the developed multi-targeting plasmid, pSOY, was a credible substitute for matrix reference materials and could be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events.
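The efficiency and R(2) figures quoted above are typically derived from a standard curve of Ct versus log10 of template copy number. The sketch below shows that standard calculation with made-up Ct values; the data are placeholders, not the pSOY measurements.

```python
# qPCR efficiency and R^2 from a (hypothetical) standard curve.
import numpy as np

copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])   # hypothetical Ct values

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
pred = slope * np.log10(copies) + intercept
r2 = 1 - np.sum((ct - pred) ** 2) / np.sum((ct - ct.mean()) ** 2)
efficiency = 10 ** (-1.0 / slope) - 1           # standard qPCR efficiency formula

print(f"slope={slope:.3f}, R^2={r2:.4f}, efficiency={efficiency:.1%}")
```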
Predicting targets of compounds against neurological diseases using cheminformatic methodology
NASA Astrophysics Data System (ADS)
Nikolic, Katarina; Mavridis, Lazaros; Bautista-Aguilera, Oscar M.; Marco-Contelles, José; Stark, Holger; do Carmo Carreiras, Maria; Rossi, Ilaria; Massarelli, Paola; Agbaba, Danica; Ramsay, Rona R.; Mitchell, John B. O.
2015-02-01
Recently developed multi-targeted ligands are novel drug candidates able to interact with monoamine oxidase A and B; acetylcholinesterase and butyrylcholinesterase; or with histamine N-methyltransferase and histamine H3-receptor (H3R). These proteins are drug targets in the treatment of depression, Alzheimer's disease, obsessive disorders, and Parkinson's disease. A probabilistic method, the Parzen-Rosenblatt window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Molecular structures were represented based on the circular fingerprint methodology. The same approach was used to build a "predictor" model from the DrugBank dataset to determine the main pharmacological groups of the compound. The study of off-target interactions is now recognised as crucial to the understanding of both drug action and toxicology. Primary pharmaceutical targets and off-targets for the novel multi-target ligands were examined by use of the developed cheminformatic method. Several multi-target ligands were selected for further study, as compounds with possible additional beneficial pharmacological activities. The cheminformatic target identifications were in agreement with four 3D-QSAR (H3R/D1R/D2R/5-HT2aR) models and with in vitro assays for serotonin 5-HT1a and 5-HT2a receptor binding of the most promising ligand (71/MBA-VEG8).
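As a rough illustration of the Parzen-Rosenblatt window idea described above, the sketch below fits one kernel density estimate per protein target over the fingerprints of its known ligands and ranks targets for a query compound by density. Fingerprints are assumed to be precomputed fixed-length vectors, and the bandwidth, kernel and target names are illustrative assumptions rather than the authors' settings.

```python
# Parzen-window style target "predictor" over precomputed fingerprints.
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_target_models(fingerprints_by_target, bandwidth=1.0):
    return {t: KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(fp)
            for t, fp in fingerprints_by_target.items()}

def predict_targets(models, query_fp):
    """Return targets ranked by log-density at the query fingerprint."""
    scores = {t: float(m.score_samples(query_fp[None, :])[0]) for t, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# toy usage with random 128-bit "fingerprints" for two hypothetical targets
rng = np.random.default_rng(2)
data = {"MAO-B": rng.integers(0, 2, (50, 128)).astype(float),
        "AChE": rng.integers(0, 2, (50, 128)).astype(float)}
models = fit_target_models(data)
print(predict_targets(models, data["AChE"][0])[0][0])
```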
Capurro, Valeria; Busquet, Perrine; Lopes, Joao Pedro; Bertorelli, Rosalia; Tarozzo, Glauco; Bolognesi, Maria Laura; Piomelli, Daniele; Reggiani, Angelo; Cavalli, Andrea
2013-01-01
Alzheimer's disease (AD) is characterized by progressive loss of cognitive function, dementia and altered behavior. Over 30 million people worldwide suffer from AD and available therapies are still palliative rather than curative. Recently, Memoquin (MQ), a quinone-bearing polyamine compound, has emerged as a promising anti-AD lead candidate, mainly thanks to its multi-target profile. MQ acts as an acetylcholinesterase and β-secretase-1 inhibitor, and also possesses anti-amyloid and anti-oxidant properties. Despite this potential interest, in vivo behavioral studies with MQ have been limited. Here, we report on in vivo studies with MQ (acute and sub-chronic treatments; 7-15 mg/kg per os) carried out using two different mouse models: i) scopolamine- and ii) beta-amyloid peptide- (Aβ-) induced amnesia. Several aspects related to memory were examined using the T-maze, the Morris water maze, the novel object recognition, and the passive avoidance tasks. At the dose of 15 mg/kg, MQ was able to rescue all tested aspects of cognitive impairment including spatial, episodic, aversive, short and long-term memory in both scopolamine- and Aβ-induced amnesia models. Furthermore, when tested in primary cortical neurons, MQ was able to fully prevent the Aβ-induced neurotoxicity mediated by oxidative stress. The results support the effectiveness of MQ as a cognitive enhancer, and highlight the value of a multi-target strategy to address the complex nature of cognitive dysfunction in AD.
Capurro, Valeria; Busquet, Perrine; Lopes, Joao Pedro; Bertorelli, Rosalia; Tarozzo, Glauco; Bolognesi, Maria Laura; Piomelli, Daniele; Reggiani, Angelo; Cavalli, Andrea
2013-01-01
Alzheimer's disease (AD) is characterized by progressive loss of cognitive function, dementia and altered behavior. Over 30 million people worldwide suffer from AD and available therapies are still palliative rather than curative. Recently, Memoquin (MQ), a quinone-bearing polyamine compound, has emerged as a promising anti-AD lead candidate, mainly thanks to its multi-target profile. MQ acts as an acetylcholinesterase and β-secretase-1 inhibitor, and also possesses anti-amyloid and anti-oxidant properties. Despite this potential interest, in vivo behavioral studies with MQ have been limited. Here, we report on in vivo studies with MQ (acute and sub-chronic treatments; 7–15 mg/kg per os) carried out using two different mouse models: i) scopolamine- and ii) beta-amyloid peptide- (Aβ-) induced amnesia. Several aspects related to memory were examined using the T-maze, the Morris water maze, the novel object recognition, and the passive avoidance tasks. At the dose of 15 mg/kg, MQ was able to rescue all tested aspects of cognitive impairment including spatial, episodic, aversive, short and long-term memory in both scopolamine- and Aβ-induced amnesia models. Furthermore, when tested in primary cortical neurons, MQ was able to fully prevent the Aβ-induced neurotoxicity mediated by oxidative stress. The results support the effectiveness of MQ as a cognitive enhancer, and highlight the value of a multi-target strategy to address the complex nature of cognitive dysfunction in AD. PMID:23441223
NASA Astrophysics Data System (ADS)
Basant, Nikita; Gupta, Shikha
2018-03-01
The reactions of molecular ozone (O3), hydroxyl (•OH) and nitrate (NO3) radicals are among the major pathways of removal of volatile organic compounds (VOCs) in the atmospheric environment. The gas-phase kinetic rate constants (kO3, kOH, kNO3) are thus important in assessing the ultimate fate and exposure risk of atmospheric VOCs. Experimental rate constants are not available for many emerging VOCs, and the computational methods reported so far address single-target modeling only. In this study, we have developed a multi-target (mt) QSPR model for simultaneous prediction of multiple kinetic rate constants (kO3, kOH, kNO3) of diverse organic chemicals, considering an experimental data set of VOCs for which values of all three rate constants are available. The mt-QSPR model identified and used five descriptors related to the molecular size, degree of saturation and electron density in a molecule, which were mechanistically interpretable. These descriptors successfully predicted the three rate constants simultaneously. The model yielded high correlations (R2 = 0.874-0.924) between the experimental and simultaneously predicted endpoint rate constant (kO3, kOH, kNO3) values in the test arrays for all three systems. The model also passed all the stringent statistical validation tests for external predictivity. The proposed multi-target QSPR model can be successfully used for predicting the reactivity of new VOCs simultaneously for their exposure risk assessment.
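A minimal sketch of a multi-target regression set-up analogous to the mt-QSPR idea is given below: one model predicts the three endpoints (kO3, kOH, kNO3) at once from molecular descriptors. The descriptor matrix is random placeholder data, and the learner is a generic multi-output regressor; nothing here reproduces the authors' five descriptors or their model.

```python
# Multi-output regression as a stand-in for a multi-target QSPR model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))                 # 5 hypothetical descriptors
W = rng.normal(size=(5, 3))
Y = X @ W + 0.1 * rng.normal(size=(200, 3))   # 3 endpoints: kO3, kOH, kNO3 (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
print([round(r2_score(Y_te[:, j], model.predict(X_te)[:, j]), 3) for j in range(3)])
```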
Multitarget transcranial direct current stimulation for freezing of gait in Parkinson's disease.
Dagan, Moria; Herman, Talia; Harrison, Rachel; Zhou, Junhong; Giladi, Nir; Ruffini, Giulio; Manor, Brad; Hausdorff, Jeffrey M
2018-04-01
Recent findings suggest that transcranial direct current stimulation of the primary motor cortex may ameliorate freezing of gait. However, the effects of multitarget simultaneous stimulation of motor and cognitive networks are mostly unknown. The objective of this study was to evaluate the effects of multitarget transcranial direct current stimulation of the primary motor cortex and left dorsolateral prefrontal cortex on freezing of gait and related outcomes. Twenty patients with Parkinson's disease and freezing of gait received 20 minutes of transcranial direct current stimulation on 3 separate visits. Transcranial direct current stimulation targeted the primary motor cortex and left dorsolateral prefrontal cortex simultaneously, primary motor cortex only, or sham stimulation (order randomized and double-blinded assessments). Participants completed a freezing of gait-provoking test, the Timed Up and Go, and the Stroop test before and after each transcranial direct current stimulation session. Performance on the freezing of gait-provoking test (P = 0.010), Timed Up and Go (P = 0.006), and the Stroop test (P = 0.016) improved after simultaneous stimulation of the primary motor cortex and left dorsolateral prefrontal cortex, but not after primary motor cortex only or sham stimulation. Transcranial direct current stimulation designed to simultaneously target motor and cognitive regions apparently induces immediate aftereffects in the brain that translate into reduced freezing of gait and improvements in executive function and mobility. © 2018 International Parkinson and Movement Disorder Society.
A local approach for focussed Bayesian fusion
NASA Astrophysics Data System (ADS)
Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen
2009-04-01
Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion that is separated from fixed modeling assumptions. Using the small world formalism, we argue why this approach conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with a high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information which we perform to build local models. In this paper, we prove the validity of this approach using information-theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner. Here, several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. For the practical realization of distributed local Bayesian fusion, software agents are well suited. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. There are various models and vocabularies drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and different coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. Then a logical data model is set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and enhance maintenance of the whole database system; the attribute dictionary organizes fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and the comprehensive data dictionary manages system operation and security. (3) An extension to system data management functions based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent term use for fields. The model dictionary is used to generate a database operation interface automatically with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
Zhang, Meng-Yu; Wang, Jie-Ping
2017-04-01
The abilities to escape apoptosis induced by anticancer drugs are an essential factor of carcinogenesis and a hallmark of resistance to cancer therapy. In this study, we identified hTERTR-FAM96A (human telomerase reverse transcriptase-family with sequence similarity 96 member A) as a new efficient agent for apoptosome-activating and anti-tumor protein and investigated the potential tumor suppressor function in hepatocellular carcinoma. The hTERTR-FAM96A fusion protein was constructed by genetic engineering and its anticancer function of hTERTR-FAM96A was explored in vitro and in vivo by investigating the possible preclinical outcomes. Effects of hTERTR-FAM96A on improvement of apoptotic sensitivity and inhibition of migration and invasion were examined in cancer cells and tumors. Our results showed that the therapeutic effects of hTERTR-FAM96A were highly effective for inhibiting tumor growth and inducing apoptosis of hepatocellular carcinoma cells in H22-bearing nude mice. The hTERTR-FAM96A fusion protein could specifically bind with Apaf-1 and hTERT, which further induced apoptosis of hepatocellular carcinoma cells and improved apoptosis sensitivity. Our results indicated that hTERTR-FAM96A treatment enhanced cytotoxic effects by upregulation of cytotoxic T lymphocyte responses, interferon-γ release, and T lymphocyte infiltration. In addition, hTERTR-FAM96A led to tumor-specific immunologic cytotoxicity through increasing apoptotic body on hepatocellular tumors. Furthermore, hTERTR-FAM96A dramatically inhibited tumor growth, reduced death rate, and prolonged mice survival in hepatocellular carcinoma mice derived from three independent hepatocellular carcinoma mice cohorts compared to control groups. In summary, our data suggest that hTERTR-FAM96A may serve as an efficient anti-tumor agent for the treatment of hepatocellular carcinoma.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-Low-Light CMOS has been widely applied in the field of night vision as a new type of solid-state image sensor. However, when the illumination in the scene changes drastically or is too strong, an Extreme-Low-Light CMOS sensor cannot clearly present both the high-light and low-light regions of the scene. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of the low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. In the process of source image reconstruction from the Laplacian pyramid, we compare the fusion results obtained with four kinds of basal images. The algorithm is tested using Matlab and compared with different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient and standard deviation, for further analysis of the fusion results. Experiments under different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. Verification of the algorithm's features indicates a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
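A simplified sketch of the fusion rules described above is given below: the coarsest layer is fused by regional average gradient and the detail layers by regional energy. For brevity it uses a non-subsampled Gaussian/Laplacian band decomposition rather than a true Laplacian pyramid, so it illustrates the weighting strategy, not the authors' exact implementation; band sigmas and window sizes are assumptions.

```python
# Exposure fusion with regional-gradient (base) and regional-energy (detail) rules.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def bands(img, sigmas=(1.0, 2.0, 4.0)):
    """Return [detail_0, detail_1, ..., base] band images (non-subsampled)."""
    levels, current = [], img.astype(float)
    for s in sigmas:
        blurred = gaussian_filter(current, s)
        levels.append(current - blurred)      # detail (Laplacian-like) band
        current = blurred
    levels.append(current)                    # coarse base layer
    return levels

def regional_energy(band, win=9):
    return uniform_filter(band ** 2, size=win)

def regional_avg_gradient(base, win=9):
    gy, gx = np.gradient(base)
    return uniform_filter(np.sqrt(gx ** 2 + gy ** 2), size=win)

def fuse_exposures(long_exp, short_exp):
    a, b = bands(long_exp), bands(short_exp)
    fused = []
    for la, lb in zip(a[:-1], b[:-1]):        # detail layers: pick larger regional energy
        fused.append(np.where(regional_energy(la) >= regional_energy(lb), la, lb))
    wa = regional_avg_gradient(a[-1])         # base layer: weight by regional avg gradient
    wb = regional_avg_gradient(b[-1])
    fused_base = (wa * a[-1] + wb * b[-1]) / (wa + wb + 1e-12)
    return fused_base + sum(fused)

rng = np.random.default_rng(4)
print(fuse_exposures(rng.random((128, 128)), rng.random((128, 128))).shape)
```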
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Assist with the evaluation and measurement of wetland hydroperiod at the Plum Brook Station using multi-source remote sensing data as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded on the multi-source remote sensing capabilities to help estimate and measure hydroperiod and the relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring hydroperiod and relative soil moisture of wetlands. This is important as a changing regional climate has several potential risks for wetland ecosystem function. The year-two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: (1) Show the relative length of hydroperiod using available remote sensing datasets, (2) Date-linked table of wetlands extent over time for all feasible non-forested wetlands, (3) Utilize LIDAR data to measure topographic height above sea level of all wetlands, wetland-to-catchment-area ratio, slope of wetlands, and other useful variables, (4) A demonstration of how analyzed results from multiple remote sensing data sources can help with wetlands vulnerability assessment; and (5) An MTRI-style report summarizing year 2 results.
Multi-source energy harvester to power sensing hardware on rotating structures
NASA Astrophysics Data System (ADS)
Schlichting, Alexander; Ouellette, Scott; Carlson, Clinton; Farinholt, Kevin M.; Park, Gyuhae; Farrar, Charles R.
2010-04-01
The U.S. Department of Energy (DOE) proposes to meet 20% of the nation's energy needs through wind power by the year 2030. To accomplish this goal, the industry will need to produce larger (>100m diameter) turbines to increase efficiency and maximize energy production. It will be imperative to instrument the large composite structures with onboard sensing to provide structural health monitoring capabilities to understand the global response and integrity of these systems as they age. A critical component in the deployment of such a system will be a robust power source that can operate for the lifespan of the wind turbine. In this paper we consider the use of discrete, localized power sources that derive energy from the ambient (solar, thermal) or operational (kinetic) environment. This approach will rely on a multi-source configuration that scavenges energy from photovoltaic and piezoelectric transducers. Each harvester is first characterized individually in the laboratory and then they are combined through a multi-source power conditioner that is designed to combine the output of each harvester in series to power a small wireless sensor node that has active-sensing capabilities. The advantages/disadvantages of each approach are discussed, along with the proposed design for a field ready energy harvester that will be deployed on a small-scale 19.8m diameter wind turbine.
Modeling multi-source flooding disaster and developing simulation framework in Delta
NASA Astrophysics Data System (ADS)
Liu, Y.; Cui, X.; Zhang, W.
2016-12-01
Most delta regions of the world are densely populated and have advanced economies. However, due to the impact of multi-source flooding (upstream floods, rainstorm waterlogging, storm surge floods), delta regions are very vulnerable, and academic circles attach great importance to multi-source flooding disasters in these areas. The Pearl River Delta urban agglomeration in south China is selected as the research area. Based on analysis of the natural and environmental characteristics data of the delta urban agglomeration (remote sensing data, land use data, topographic maps, etc.), hydrological monitoring data, research on the uneven distribution and process of regional rainfall, the relationship between the underlying surface and runoff parameters, and the effect of flood storage patterns, we use an automatic or semi-automatic method for dividing spatial units to reflect the runoff characteristics in the urban agglomeration, and develop a Multi-model Ensemble System for the changing environment, including an urban hydrologic model, a parallel computational 1D&2D hydrodynamic model, a storm surge forecast model and other professional models. The system will have capabilities such as real-time setting of a variety of boundary conditions, fast and real-time calculation, dynamic presentation of results, and powerful statistical analysis. The model can be optimized and improved by a variety of verification methods. This work was supported by the National Natural Science Foundation of China (41471427) and the Special Basic Research Key Fund for Central Public Scientific Research Institutes.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variabilities when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation metric and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the source PDFs. The metrics have been evaluated and have demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
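The building block behind the metrics above is the pairwise Jensen-Shannon distance matrix among the source PDFs. The sketch below computes that matrix and two simple summary numbers (mean pairwise distance, per-source mean distance) which are only rough stand-ins for the paper's simplex-projection-based metrics, not the metrics themselves.

```python
# Pairwise Jensen-Shannon distances among source distributions.
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_matrix(pdfs):
    """pdfs: list of 1-D histograms on a common binning (need not be normalized)."""
    pdfs = [np.asarray(p, float) / np.sum(p) for p in pdfs]
    n = len(pdfs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = jensenshannon(pdfs[i], pdfs[j], base=2)
    return D

# toy usage: three "sources" with slightly different distributions
rng = np.random.default_rng(5)
sources = [np.histogram(rng.normal(loc=mu, size=1000), bins=30, range=(-4, 5))[0]
           for mu in (0.0, 0.1, 1.0)]
D = js_matrix(sources)
print("global variability proxy:", round(D[np.triu_indices(3, 1)].mean(), 3))
print("per-source dissimilarity proxy:", np.round(D.sum(axis=1) / 2, 3))
```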
Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.
Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin
2018-02-09
Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy losses of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Via spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be reflected. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, which is helpful for reconstructing hydrological sequences; the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, and then the differencing technique applied to high-resolution optical and microwave images before and after floods can be used to calculate flood extent and reflect spatial changes of inundation; the monitoring algorithm for flood depth combining RS and GIS is simple and can quickly calculate depth for a known flood extent obtained from remote sensing images in ungauged basins. Relevant results can provide effective help for the disaster relief work performed by government departments.
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion of data from different sources is ubiquitous in many applications, from epidemiology, medical, biological, political, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-Ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from data from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian to evidence-based expert systems. The implementation of data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.
2013-01-01
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year [1]. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans [2]. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357
Comparison and evaluation of fusion methods used for GF-2 satellite image in coastal mangrove area
NASA Astrophysics Data System (ADS)
Ling, Chengxing; Ju, Hongbo; Liu, Hua; Zhang, Huaiqing; Sun, Hua
2018-04-01
The GF-2 satellite has the highest spatial resolution of any remote sensing satellite in the history of China's satellite development. In this study, three traditional fusion methods, Brovey, Gram-Schmidt and Color Normalized (CN), were compared with a newer fusion method, NNDiffuse, using qualitative assessment and quantitative fusion quality indices including information entropy, variance, mean gradient, deviation index, and spectral correlation coefficient. The analysis results show that the NNDiffuse method performed best in both the qualitative and quantitative analysis. It is more effective for subsequent remote sensing information extraction and for forest and wetland resource monitoring applications.
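Several of the quantitative indices used in the comparison above can be computed directly from the fused image. The sketch below shows one common definition for information entropy, mean gradient and the spectral correlation coefficient against a reference band, for an 8-bit single-band image held in a NumPy array; the exact definitions used by the authors may differ.

```python
# Simple fusion-quality indices for a single-band 8-bit image.
import numpy as np

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mean_gradient(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spectral_correlation(fused, reference):
    return float(np.corrcoef(fused.ravel(), reference.ravel())[0, 1])

rng = np.random.default_rng(6)
fused = rng.integers(0, 256, (256, 256)).astype(float)   # placeholder fused band
ref = np.clip(fused + rng.normal(0, 10, fused.shape), 0, 255)
print(entropy(fused), fused.var(), mean_gradient(fused), spectral_correlation(fused, ref))
```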
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As the directional filter is effective in capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviation is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with shift-invariance, directional selectivity, and detail-enhanced properties, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-09-15
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.
Interagency and Multinational Information Sharing Architecture and Solutions (IMISAS) Project
2012-02-01
Defense (DOD) Enterprise Unclassified Information Sharing Service, August 10, 2010 12 Lindenmayer, Martin J. Civil Information and Intelligence Fusion...Organizations (1:2), 44-65. Lindenmayer, Martin J. Civil Information and Intelligence Fusion: Making “Non-Traditional” into “New Traditional” for...perceived as a good start which needs more development. References [Badke-Schaub et al. 2008] Badke-Schaub, Petra ; Hofinger, Gesine; Lauche
Adoptive T cell cancer therapy
NASA Astrophysics Data System (ADS)
Dzhandzhugazyan, Karine N.; Guldberg, Per; Kirkin, Alexei F.
2018-06-01
Tumour heterogeneity and off-target toxicity are current challenges of cancer immunotherapy. Karine Dzhandzhugazyan, Per Guldberg and Alexei Kirkin discuss how epigenetic induction of tumour antigens in antigen-presenting cells may form the basis for multi-target therapies.
A novel framework for command and control of networked sensor systems
NASA Astrophysics Data System (ADS)
Chen, Genshe; Tian, Zhi; Shen, Dan; Blasch, Erik; Pham, Khanh
2007-04-01
In this paper, we have proposed a highly innovative advanced command and control framework for sensor networks used for future Integrated Fire Control (IFC). The primary goal is to enable and enhance target detection, validation, and mitigation for future military operations by graphical game theory and advanced knowledge information fusion infrastructures. The problem is approached by representing distributed sensor and weapon systems as generic warfare resources which must be optimized in order to achieve the operational benefits afforded by enabling a system of systems. This paper addresses the importance of achieving a Network Centric Warfare (NCW) foundation of information superiority: shared, accurate, and timely situational awareness upon which advanced automated management aids for IFC can be built. The approach uses the Data Fusion Information Group (DFIG) Fusion hierarchy of Level 0 through Level 4 to fuse the input data into assessments for the enemy target system threats in a battlespace to which military force is being applied. Compact graph models are employed across all levels of the fusion hierarchy to accomplish integrative data fusion and information flow control, as well as cross-layer sensor management. The functional block at each fusion level will have a set of innovative algorithms that not only exploit the corresponding graph model in a computationally efficient manner, but also permit combined functional experiments across levels by virtue of the unifying graphical model approach.
Wang, Qi; Xie, Zhiyi; Li, Fangbai
2015-11-01
This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF) in agricultural soils on the local scale. The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for the assessment of multi-source and multi-phase heavy metal pollution in agricultural soils on the local scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region and that SGB performed better than RF. Copyright © 2015 Elsevier Ltd. All rights reserved.
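As a rough illustration of the two ensemble learners named above, the sketch below fits stochastic gradient boosting and a random forest to predict a soil metal concentration from candidate source indicators and inspects feature importances as a loose, illustrative proxy for source contributions. The data are random placeholders, and feature importances are not a full source-apportionment analysis.

```python
# SGB vs RF on synthetic soil data, with importances as a crude contribution proxy.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 4))            # e.g. parent material, pH, industry, traffic proxies
y = 1.5 * X[:, 2] + 0.8 * X[:, 3] + 0.3 * X[:, 0] + 0.2 * rng.normal(size=300)

sgb = GradientBoostingRegressor(subsample=0.7, random_state=0)   # "stochastic" via subsample < 1
rf = RandomForestRegressor(n_estimators=300, random_state=0)

for name, model in [("SGB", sgb), ("RF", rf)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    model.fit(X, y)
    print(name, round(score, 3), np.round(model.feature_importances_, 3))
```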
Gomez, Rapson; Burns, G Leonard; Walsh, James A; Hafetz, Nina
2005-04-01
Confirmatory factor analysis (CFA) was used to model a multitrait by multisource matrix to determine the convergent and discriminant validity of measures of attention-deficit hyperactivity disorder (ADHD)-inattention (IN), ADHD-hyperactivity/impulsivity (HI), and oppositional defiant disorder (ODD) in 917 Malaysian elementary school children. The three trait factors were ADHD-IN, ADHD-HI, and ODD. The two source factors were parents and teachers. Similar to earlier studies with Australian and Brazilian children, the parent and teacher measures failed to show convergent and discriminant validity with Malaysian children. The study outlines the implications of such strong source effects in ADHD-IN, ADHD-HI, and ODD measures for the use of such parent and teacher scales to study the symptom dimensions.
A comparison of synthesis and integrative approaches for meaning making and information fusion
NASA Astrophysics Data System (ADS)
Eggleston, Robert G.; Fenstermacher, Laurie
2017-05-01
Traditionally, information fusion approaches to meaning making have been integrative or aggregative in nature, creating meaning "containers" in which to put content (e.g., attributes) about object classes. In a large part, this was due to the limits in technology/tools for supporting information fusion (e.g., computers). A different synthesis based approach for meaning making is described which takes advantage of computing advances. The approach is not focused on the events/behaviors being observed/sensed; instead, it is human work centric. The former director of the Defense Intelligence Agency once wrote, "Context is king. Achieving an understanding of what is happening - or will happen - comes from a truly integrated picture of an area, the situation and the various personalities in it…a layered approach over time that builds depth of understanding." [1] The synthesis based meaning making framework enables this understanding. It is holistic (both the sum and the parts, the proverbial forest and the trees), multi-perspective and emulative (as opposed to representational). The two approaches are complementary, with the synthesis based meaning making framework as a wrapper. The integrative approach would be dominant at level 0,1 fusion: data fusion, track formation and the synthesis based meaning making becomes dominant at higher fusion levels (levels 2 and 3), although both may be in play. A synthesis based approach to information fusion is thus well suited for "gray zone" challenges in which there is aggression and ambiguity and which are inherently perspective dependent (e.g., recent events in Ukraine).
DOT National Transportation Integrated Search
2010-06-01
This guidebook provides an overview of the mission and functions of transportation management centers, emergency operations centers, and fusion centers. The guidebook focuses on the types of information these centers produce and manage and how the sh...
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Primary Goal: Assist with the evaluation and measurement of wetland hydroperiod at the Plum Brook Station using multi-source remote sensing data as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded on the multi-source remote sensing capabilities to help estimate and measure hydroperiod and the relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring hydroperiod and relative soil moisture of wetlands. This is important as a changing regional climate has several potential risks for wetland ecosystem function. The year-two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: 1) Show the relative length of hydroperiod using available remote sensing datasets 2) Date-linked table of wetlands extent over time for all feasible non-forested wetlands 3) Utilize LIDAR data to measure topographic height above sea level of all wetlands, wetland-to-catchment-area ratio, slope of wetlands, and other useful variables 4) A demonstration of how analyzed results from multiple remote sensing data sources can help with wetlands vulnerability assessment 5) An MTRI-style report summarizing year 2 results. This report serves as a descriptive summary of our completion of these deliverables. Additionally, two formal meetings were held with Larry Liou and Amanda Sprinzl to provide project updates and receive direction on outputs. These were held on 2/26/15 and 9/17/15 at the Plum Brook Station. Principal Component Analysis (PCA) is a multivariate statistical technique used to identify dominant spatial and temporal backscatter signatures. PCA reduces the information contained in the temporal dataset to the first few new Principal Component (PC) images. Some advantages of PCA include the ability to filter out temporal autocorrelation and reduce speckle to the higher order PC images. A PCA was performed using ERDAS Imagine on a time series of PALSAR dates. Hydroperiod maps were created by separating the PALSAR dates into two date ranges, 2006-2008 and 2010, and performing an unsupervised classification on the PCAs.
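The PCA step described above (performed by the authors in ERDAS Imagine) can be sketched in a few lines: a stack of co-registered SAR backscatter images is reshaped to pixels x dates and reduced to a few principal-component images. The random array below is a placeholder for the PALSAR stack, and the subsequent unsupervised classification is not shown.

```python
# PCA over a temporal stack of co-registered SAR images.
import numpy as np
from sklearn.decomposition import PCA

def pc_images(stack, n_components=3):
    """stack: array of shape (n_dates, rows, cols) -> (n_components, rows, cols)."""
    n_dates, rows, cols = stack.shape
    X = stack.reshape(n_dates, -1).T                 # pixels x dates
    pcs = PCA(n_components=n_components).fit_transform(X)
    return pcs.T.reshape(n_components, rows, cols)

rng = np.random.default_rng(8)
palsar_stack = rng.normal(size=(6, 100, 100))        # 6 hypothetical acquisition dates
pc = pc_images(palsar_stack)
print(pc.shape)                                      # leading PCs capture dominant temporal signatures
```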
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chugh, Brige Paul; Krishnan, Kalpagam; Liu, Jeff
2014-08-15
Integration of biological conductivity information provided by Electrical Impedance Tomography (EIT) with anatomical information provided by Computed Tomography (CT) imaging could improve the ability to characterize tissues in clinical applications. In this paper, we report results of our study which compared the fusion of EIT with CT using three different image fusion algorithms, namely: weighted averaging, wavelet fusion, and ROI indexing. The ROI indexing method of fusion involves segmenting the regions of interest from the CT image and replacing the pixels with the pixels of the EIT image. The three algorithms were applied to a CT and EIT image of an anthropomorphic phantom, constructed out of five acrylic contrast targets with varying diameter embedded in a base of gelatin bolus. The imaging performance was assessed using Detectability and Structural Similarity Index Measure (SSIM). Wavelet fusion and ROI-indexing resulted in lower Detectability (by 35% and 47%, respectively) yet higher SSIM (by 66% and 73%, respectively) than weighted averaging. Our results suggest that wavelet fusion and ROI-indexing yielded more consistent and optimal fusion performance than weighted averaging.
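Of the three schemes compared above, pixel-wise weighted averaging is the simplest and can be sketched directly, together with the SSIM assessment. This is a minimal illustration under the assumption that the CT and EIT images are co-registered arrays of the same shape; the variable names and equal weights are illustrative, not the study's settings.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def weighted_average_fusion(ct, eit, w_ct=0.5, w_eit=0.5):
    """Pixel-wise weighted average of co-registered CT and EIT images."""
    return w_ct * np.asarray(ct, float) + w_eit * np.asarray(eit, float)

# Example assessment against the CT reference (data_range needed for floats):
# fused = weighted_average_fusion(ct, eit)
# score = ssim(ct, fused, data_range=fused.max() - fused.min())
```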
Adaptive polarization image fusion based on regional energy dynamic weighted average
NASA Astrophysics Data System (ADS)
Zhao, Yong-Qiang; Pan, Quan; Zhang, Hong-Cai
2005-11-01
According to the principle of polarization imaging and the relation between Stokes parameters and the degree of linear polarization, there is much redundant and complementary information in polarized images. Since man-made objects and natural objects can be easily distinguished in images of the degree of linear polarization, and images of Stokes parameters contain rich detailed information of the scene, the clutter in the images can be removed efficiently while the detailed information is maintained by combining these images. An algorithm of adaptive polarization image fusion based on regional energy dynamic weighted average is proposed in this paper to combine these images. Through an experiment and simulations, most of the clutter is removed by this algorithm. The fusion method is applied to different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
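As a rough illustration of the quantities involved, the sketch below computes the degree of linear polarization from Stokes parameter images and fuses two images with a generic regional-energy weighted average. The window size and the exact weighting rule are assumptions; the paper's adaptive rule may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def degree_of_linear_polarization(I, Q, U, eps=1e-8):
    """DoLP = sqrt(Q^2 + U^2) / I from Stokes parameter images."""
    return np.sqrt(Q**2 + U**2) / (I + eps)

def regional_energy_fusion(img_a, img_b, window=7):
    """Fuse two images with weights proportional to local (regional) energy."""
    e_a = uniform_filter(img_a**2, size=window)   # windowed mean of squares
    e_b = uniform_filter(img_b**2, size=window)
    w_a = e_a / (e_a + e_b + 1e-12)
    return w_a * img_a + (1.0 - w_a) * img_b
```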
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast, more detail information, and suitability for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended for fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
Connected Vehicle Applications : Mobility
DOT National Transportation Integrated Search
2017-03-03
Connected vehicle mobility applications are commonly referred to as dynamic mobility applications (DMAs). DMAs seek to fully leverage frequently collected and rapidly disseminated multi-source data gathered from connected travelers, vehicles, and inf...
Embedding the results of focussed Bayesian fusion into a global context
NASA Astrophysics Data System (ADS)
Sander, Jennifer; Heizmann, Michael
2014-05-01
Bayesian statistics offers a well-founded and powerful fusion methodology, also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists also pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion. Here, the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. By this, the global posterior distribution is not completely determined. Strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in different other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.
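A toy discrete example may help fix the idea of restricting the Bayesian update to a local context and then embedding the result into a global distribution by a maximum-entropy (here uniform) completion. The prior, likelihood, context and residual mass below are invented for illustration and are not the authors' formalism.

```python
import numpy as np

prior = np.full(10, 0.1)                     # 10 hypotheses, uniform prior
likelihood = np.array([0.1, 0.2, 0.9, 0.8, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
context = np.array([2, 3, 4])                # focus the fusion on these states

# Focussed posterior: Bayes' rule evaluated only on the local context.
local = prior[context] * likelihood[context]
local_posterior = local / local.sum()

# Maximum-entropy embedding: keep an assumed residual mass alpha outside the
# context and spread it uniformly over the remaining states.
alpha = 0.2
global_posterior = np.full(10, alpha / 7.0)  # 7 states outside the context
global_posterior[context] = (1.0 - alpha) * local_posterior
```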
Hamilton, Brian S.; Whittaker, Gary R.; Daniel, Susan
2012-01-01
Hemagglutinin (HA) is the viral protein that facilitates the entry of influenza viruses into host cells. This protein controls two critical aspects of entry: virus binding and membrane fusion. In order for HA to carry out these functions, it must first undergo a priming step, proteolytic cleavage, which renders it fusion competent. Membrane fusion commences from inside the endosome after a drop in lumenal pH and an ensuing conformational change in HA that leads to the hemifusion of the outer membrane leaflets of the virus and endosome, the formation of a stalk between them, followed by pore formation. Thus, the fusion machinery is an excellent target for antiviral compounds, especially those that target the conserved stem region of the protein. However, traditional ensemble fusion assays provide a somewhat limited ability to directly quantify fusion, partly due to the inherent averaging of individual fusion events resulting from experimental constraints. Inspired by the gains achieved by single molecule experiments and analysis of stochastic events, recently developed individual virion imaging techniques and analysis of single fusion events have provided critical information about individual virion behavior, discriminated intermediate fusion steps within a single virion, and allowed the study of the overall population dynamics without the loss of discrete, individual information. In this article, we first start by reviewing the determinants of HA fusogenic activity and the viral entry process, highlight some open questions, and then describe the experimental approaches for assaying fusion that will be useful in developing the most effective therapies in the future. PMID:22852045
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
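The two coefficient-fusion rules named above can be sketched in isolation. The snippet below shows a standard-deviation weighted rule for the sparse components and a max-absolute rule for the low-rank components; the RPCA decomposition and the compressed-sensing measurement/reconstruction steps (FCLALM) are omitted, and the array names are placeholders.

```python
import numpy as np

def sd_weighted_fusion(sparse_ir, sparse_vis):
    """Weight each source's sparse coefficients by their standard deviation."""
    s_ir, s_vis = np.std(sparse_ir), np.std(sparse_vis)
    w_ir = s_ir / (s_ir + s_vis + 1e-12)
    return w_ir * sparse_ir + (1.0 - w_ir) * sparse_vis

def max_abs_fusion(lowrank_ir, lowrank_vis):
    """Keep, element-wise, the coefficient with the larger magnitude."""
    return np.where(np.abs(lowrank_ir) >= np.abs(lowrank_vis),
                    lowrank_ir, lowrank_vis)

# fused = sd_weighted_fusion(S_ir, S_vis) + max_abs_fusion(L_ir, L_vis)
```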
Integrating public health and medical intelligence gathering into homeland security fusion centres.
Lenart, Brienne; Albanese, Joseph; Halstead, William; Schlegelmilch, Jeffrey; Paturas, James
Homeland security fusion centres serve to gather, analyse and share threat-related information among all levels of governments and law enforcement agencies. In order to function effectively, fusion centres must employ people with the necessary competencies to understand the nature of the threat facing a community, discriminate between important information and irrelevant or merely interesting facts and apply domain knowledge to interpret the results to obviate or reduce the existing danger. Public health and medical sector personnel routinely gather, analyse and relay health-related information, including health security risks, associated with the detection of suspicious biological or chemical agents within a community to law enforcement agencies. This paper provides a rationale for the integration of public health and medical personnel in fusion centres and describes their role in assisting law enforcement agencies, public health organisations and the medical sector to respond to natural or intentional threats against local communities, states or the nation as a whole.
Antibacterial Drug Leads: DNA and Enzyme Multitargeting
Zhu, Wei; Wang, Yang; Li, Kai; ...
2015-01-09
Here, we report the results of an investigation of the activity of a series of amidine and bisamidine compounds against Staphylococcus aureus and Escherichia coli. The most active compounds bound to an AT-rich DNA dodecamer (CGCGAATTCGCG)2 and using DSC were found to increase the melting transition by up to 24 °C. Several compounds also inhibited undecaprenyl diphosphate synthase (UPPS) with IC50 values of 100–500 nM, and we found good correlations (R2 = 0.89, S. aureus; R2 = 0.79, E. coli) between experimental and predicted cell growth inhibition by using DNA ΔTm and UPPS IC50 experimental results together with one computed descriptor. Finally, we also solved the structures of three bisamidines binding to DNA as well as three UPPS structures. Overall, the results are of general interest in the context of the development of resistance-resistant antibiotics that involve multitargeting.
Multitarget detection algorithm for automotive FMCW radar
NASA Astrophysics Data System (ADS)
Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun
2012-06-01
Today, 77 GHz FMCW (Frequency Modulated Continuous Wave) radar has strong advantages for range and velocity detection in automotive applications. However, FMCW radar produces ghost targets and missed targets in multi-target situations. In this paper, in order to resolve these limitations, we propose an effective pairing algorithm, which consists of two steps. In the proposed method, a waveform with different slopes in two periods is used. In the first pairing step, all combinations of range and velocity are obtained in each of the two wave periods. In the second pairing step, using the results of the first step, the fine range and velocity are detected. In addition, we propose a range-velocity windowing technique to compensate for the non-ideal beat-frequency characteristic that arises due to the non-linearity of the RF module. Based on experimental results, the performance of the proposed algorithm is improved compared with that of the typical method.
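For orientation, the sketch below gives the textbook range/velocity recovery for a symmetric triangular FMCW waveform; because the paper's two periods use different slopes, its actual pairing equations differ, and the sign convention depends on geometry.

```python
C = 3.0e8  # speed of light, m/s

def range_velocity(f_up, f_down, slope, fc=77e9):
    """Recover range (m) and radial velocity (m/s) from the beat frequencies
    measured on the up- and down-chirp of a triangular FMCW waveform.
    slope is the chirp slope in Hz/s and fc is the carrier frequency."""
    rng = C * (f_up + f_down) / (4.0 * slope)
    vel = C * (f_down - f_up) / (4.0 * fc)
    return rng, vel
```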
Discovery of multi-target receptor tyrosine kinase inhibitors as novel anti-angiogenesis agents
NASA Astrophysics Data System (ADS)
Wang, Jinfeng; Zhang, Lin; Pan, Xiaoyan; Dai, Bingling; Sun, Ying; Li, Chuansheng; Zhang, Jie
2017-03-01
Recently, we have identified a biphenyl-aryl urea incorporated with salicylaldoxime (BPS-7) as an anti-angiogenesis agent. Herein, we disclosed a series of novel anti-angiogenesis agents with BPS-7 as lead compound through combining diarylureas with N-pyridin-2-ylcyclopropane carboxamide. Several title compounds exhibited simultaneous inhibition effects against three pro-angiogenic RTKs (VEGFR-2, TIE-2 and EphB4). Some of them displayed potent anti-proliferative activity against human vascular endothelial cell (EA.hy926). In particular, two potent compounds (CDAU-1 and CDAU-2) could be considered as promising anti-angiogenesis agents with triplet inhibition profile. The biological evaluation and molecular docking results indicate that N-pyridin-2-ylcyclopropane carboxamide could serve as a hinge-binding group (HBG) for the discovery of multi-target anti-angiogenesis agents. CDAU-2 also exhibited promising anti-angiogenic potency in a tissue model for angiogenesis.
Tassini, Sabrina; Sun, Liang; Lanko, Kristina; Crespan, Emmanuele; Langron, Emily; Falchi, Federico; Kissova, Miroslava; Armijos-Rivera, Jorge I; Delang, Leen; Mirabelli, Carmen; Neyts, Johan; Pieroni, Marco; Cavalli, Andrea; Costantino, Gabriele; Maga, Giovanni; Vergani, Paola; Leyssen, Pieter; Radi, Marco
2017-02-23
Enteroviruses (EVs) are among the most frequent infectious agents in humans worldwide and represent the leading cause of upper respiratory tract infections. No drugs for the treatment of EV infections are currently available. Recent studies have also linked EV infection with pulmonary exacerbations, especially in cystic fibrosis (CF) patients, and the importance of this link is probably underestimated. The aim of this work was to develop a new class of multitarget agents active both as broad-spectrum antivirals and as correctors of the F508del-cystic fibrosis transmembrane conductance regulator (CFTR) folding defect responsible for >90% of CF cases. We report herein the discovery of the first small molecules able to simultaneously act as correctors of the F508del-CFTR folding defect and as broad-spectrum antivirals against a panel of EVs representative of all major species.
Marco-Contelles, José; León, Rafael; de los Ríos, Cristóbal; Samadi, Abdelouahid; Bartolini, Manuela; Andrisano, Vincenza; Huertas, Oscar; Barril, Xavier; Luque, F Javier; Rodríguez-Franco, María I; López, Beatriz; López, Manuela G; García, Antonio G; Carreiras, María do Carmo; Villarroya, Mercedes
2009-05-14
Tacripyrines (1-14) have been designed by combining an AChE inhibitor (tacrine) with a calcium antagonist such as nimodipine and are targeted to develop a multitarget therapeutic strategy to confront AD. Tacripyrines are selective and potent AChE inhibitors in the nanomolar range. The mixed type inhibition of hAChE activity of compound 11 (IC(50) 105 +/- 15 nM) is associated to a 30.7 +/- 8.6% inhibition of the proaggregating action of AChE on the Abeta and a moderate inhibition of Abeta self-aggregation (34.9 +/- 5.4%). Molecular modeling indicates that binding of compound 11 to the AChE PAS mainly involves the (R)-11 enantiomer, which also agrees with the noncompetitive inhibition mechanism exhibited by p-methoxytacripyrine 11. Tacripyrines are neuroprotective agents, show moderate Ca(2+) channel blocking effect, and cross the blood-brain barrier, emerging as lead candidates for treating AD.
The role of fragment-based and computational methods in polypharmacology.
Bottegoni, Giovanni; Favia, Angelo D; Recanatini, Maurizio; Cavalli, Andrea
2012-01-01
Polypharmacology-based strategies are gaining increased attention as a novel approach to obtaining potentially innovative medicines for multifactorial diseases. However, some within the pharmaceutical community have resisted these strategies because they can be resource-hungry in the early stages of the drug discovery process. Here, we report on fragment-based and computational methods that might accelerate and optimize the discovery of multitarget drugs. In particular, we illustrate that fragment-based approaches can be particularly suited for polypharmacology, owing to the inherent promiscuous nature of fragments. In parallel, we explain how computer-assisted protocols can provide invaluable insights into how to unveil compounds theoretically able to bind to more than one protein. Furthermore, several pragmatic aspects related to the use of these approaches are covered, thus offering the reader practical insights on multitarget-oriented drug discovery projects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Gejjalagere Honnappa, Chethan; Mazhuvancherry Kesavan, Unnikrishnan
2016-12-01
Inflammatory diseases are complex, multi-factorial outcomes of evolutionarily conserved tissue repair processes. For decades, non-steroidal anti-inflammatory drugs and cyclooxygenase inhibitors, the primary drugs of choice for the management of inflammatory diseases, addressed individual targets in the arachidonic acid pathway. Unsatisfactory safety and efficacy profiles of the above have necessitated the development of multi-target agents to treat complex inflammatory diseases. Current anti-inflammatory therapies still fall short of clinical needs and the clinical trial results of multi-target therapeutics are anticipated. Additionally, new drug targets are emerging with improved understanding of molecular mechanisms controlling the pathophysiology of inflammation. This review presents an outline of small molecules and drug targets in anti-inflammatory therapeutics with a summary of a newly identified target AMP-activated protein kinase, which constitutes a novel therapeutic pathway in inflammatory pathology. © The Author(s) 2016.
Discovery of multi-target receptor tyrosine kinase inhibitors as novel anti-angiogenesis agents
Wang, Jinfeng; Zhang, Lin; Pan, Xiaoyan; Dai, Bingling; Sun, Ying; Li, Chuansheng; Zhang, Jie
2017-01-01
Recently, we have identified a biphenyl-aryl urea incorporated with salicylaldoxime (BPS-7) as an anti-angiogenesis agent. Herein, we disclosed a series of novel anti-angiogenesis agents with BPS-7 as lead compound through combining diarylureas with N-pyridin-2-ylcyclopropane carboxamide. Several title compounds exhibited simultaneous inhibition effects against three pro-angiogenic RTKs (VEGFR-2, TIE-2 and EphB4). Some of them displayed potent anti-proliferative activity against human vascular endothelial cell (EA.hy926). In particular, two potent compounds (CDAU-1 and CDAU-2) could be considered as promising anti-angiogenesis agents with triplet inhibition profile. The biological evaluation and molecular docking results indicate that N-pyridin-2-ylcyclopropane carboxamide could serve as a hinge-binding group (HBG) for the discovery of multi-target anti-angiogenesis agents. CDAU-2 also exhibited promising anti-angiogenic potency in a tissue model for angiogenesis. PMID:28332573
Multisource inverse-geometry CT. Part I. System concept and development
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Man, Bruno, E-mail: deman@ge.com; Harrison, Dan
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals.
Multisource inverse-geometry CT. Part I. System concept and development
De Man, Bruno; Uribe, Jorge; Baek, Jongduk; Harrison, Dan; Yin, Zhye; Longtin, Randy; Roy, Jaydeep; Waters, Bill; Wilson, Colin; Short, Jonathan; Inzinna, Lou; Reynolds, Joseph; Neculaes, V. Bogdan; Frutschy, Kristopher; Senzig, Bob; Pelc, Norbert
2016-01-01
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals. PMID:27487877
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
ERIC Educational Resources Information Center
Glasstone, Samuel
This publication is one of a series of information booklets for the general public published by The United States Atomic Energy Commission. Among the topics discussed are: Importance of Fusion Energy; Conditions for Nuclear Fusion; Thermonuclear Reactions in Plasmas; Plasma Confinement by Magnetic Fields; Experiments With Plasmas; High-Temperature…
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of source images are first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means with three different fusion rules at different scales. The three types of fusion rules are for small-scale detail level, large-scale detail level, and base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. After analyzing the experimental comparisons with state-of-the-art fusion methods, the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
Revealing Spatial Variation and Correlation of Urban Travels from Big Trajectory Data
NASA Astrophysics Data System (ADS)
Li, X.; Tu, W.; Shen, S.; Yue, Y.; Luo, N.; Li, Q.
2017-09-01
With the development of information and communication technology, spatial-temporal data that contain rich human mobility information are growing rapidly. However, the consistency of multi-mode human travel behind multi-source spatial-temporal data is not clear. To this aim, we utilized a week of taxi and bus GPS trajectory data and smart card data in Shenzhen, China to extract city-wide travel information for taxi, bus and metro and tested the correlation of multi-mode travel characteristics. Both the global correlation and local correlation of typical travel indicators were examined. The results show that: (1) Significant differences exist among urban multi-mode travels. The correlations between bus and taxi travel and between metro and taxi travel are globally low but locally high. (2) There are spatial differences in the correlation relationships between bus, metro and taxi travel. These findings help us understand urban travel more deeply and therefore facilitate both transport policy making and human-space interaction research.
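The global-versus-local correlation comparison can be sketched as follows, assuming a hypothetical pandas DataFrame `trips` with per-zone counts of taxi and bus trips and a `district` column; the paper's local statistic may be a different spatial measure.

```python
import pandas as pd

def global_and_local_corr(trips: pd.DataFrame):
    """City-wide (global) and per-district (local) Pearson correlation
    between taxi and bus trip counts."""
    global_corr = trips['taxi'].corr(trips['bus'])
    local_corr = (trips.groupby('district')
                       .apply(lambda g: g['taxi'].corr(g['bus'])))
    return global_corr, local_corr
```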
Chase: Control of Heterogeneous Autonomous Sensors for Situational Awareness
2016-08-03
remained the discovery and analysis of new foundational methodology for information collection and fusion that exercises rigorous feedback control over...simultaneously achieve quantified information and physical objectives. New foundational methodology for information collection and fusion that exercises... In the general area of novel stochastic systems analysis it seems appropriate to mention the pioneering work on non-Bayesian distributed learning
Yao, Chih-Jung; Yang, Chia-Ming; Chuang, Shuang-En; Yan, Jiann-Long; Liu, Chun-Yen; Chen, Suz-Wen; Yan, Kun-Huang; Lai, Tung-Yuan; Lai, Gi-Ming
2011-01-01
Tien-Hsien Liquid (THL) is a Chinese herbal mixture that has been used worldwide as a complementary treatment for cancer patients in the past decade. Recently, THL has been shown to induce apoptosis in various types of solid tumor cells in vitro. However, the underlying molecular mechanisms have not yet been well elucidated. In this study, we explored the effects of THL on acute promyelocytic leukemia (APL) NB4 cells, which can be effectively treated by some traditional Chinese remedies containing arsenic trioxide. The results showed THL could induce G2/M arrest and apoptosis in NB4 cells. Accordingly, decreases of cyclin A and B1 were observed in THL-treated cells. The THL-induced apoptosis was accompanied by caspase-3 activation and a decrease of the PML-RARα fusion protein. Moreover, DNA methyltransferase 1 and oncogenic signaling pathways such as Akt/mTOR, Stat3 and ERK were also down-regulated by THL. By using ethyl acetate extraction and silica gel chromatography, an active fraction of THL named EAS5 was isolated. At about 0.5–1% of the dose of THL, EAS5 appeared to have most of the THL-induced multiple molecular-targeting effects in NB4 cells. Based on the findings of these multi-targeting effects, THL might be regarded as a complementary and alternative therapeutic agent for refractory APL. PMID:19897545
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.
2016-05-15
Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using the optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, "snap-shot" planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, and 0.6 ± 0.4 mm for multitarget treatment with a shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed a greater than 90% pass rate for all cases using a gamma criterion of 3%/1 mm. Conclusions: The authors' experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level fusion are raw biometric data, which contain rich information when compared to decision and matching score level fusion. Hence information fused at the feature level is expected to obtain improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
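A minimal sketch of the described pipeline follows: concatenate palmprint and iris feature vectors (feature-level fusion), reduce dimensionality with PCA, and classify with KNN. Array names and parameter values are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_multimodal_classifier(palm_feats, iris_feats, labels,
                                n_components=50, k=3):
    """Feature-level fusion (concatenation), PCA reduction, KNN classifier."""
    fused = np.hstack([palm_feats, iris_feats])
    clf = make_pipeline(PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=k))
    clf.fit(fused, labels)
    return clf
```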
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level fusion are raw biometric data, which contain rich information when compared to decision and matching score level fusion. Hence information fused at the feature level is expected to obtain improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Out of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813
Advanced algorithms for distributed fusion
NASA Astrophysics Data System (ADS)
Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.
2008-03-01
The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture that utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.
A proposal to extend our understanding of the global economy
NASA Technical Reports Server (NTRS)
Hough, Robbin R.; Ehlers, Manfred
1991-01-01
Satellites acquire information on a global and repetitive basis. They are thus ideal tools for use when global scale and analysis over time is required. Data from satellites comes in digital form which means that it is ideally suited for incorporation in digital data bases and that it can be evaluated using automated techniques. The development of a global multi-source data set which integrates digital information is proposed regarding some 15,000 major industrial sites worldwide with remotely sensed images of the sites. The resulting data set would provide the basis for a wide variety of studies of the global economy. The preliminary results give promise of a new class of global policy model which is far more detailed and helpful to local policy makers than its predecessors. The central thesis of this proposal is that major industrial sites can be identified and their utilization can be tracked with the aid of satellite images.
NASA Astrophysics Data System (ADS)
Gao, Zhiqiang; Xu, Fuxiang; Song, Debin; Zheng, Xiangyu; Chen, Maosi
2017-09-01
This paper conducted dynamic monitoring of the green tide (the large green alga Ulva prolifera) that occurred in the Yellow Sea from 2014 to 2016 using multi-source remote sensing data, including GF-1 WFV, HJ-1A/1B CCD, CBERS-04 WFI, Landsat-7 ETM+ and Landsat-8 OLI, by combining VB-FAH (index of Virtual-Baseline Floating macroAlgae Height) with manually assisted interpretation based on remote sensing and geographic information system technologies. The results show that unmanned aerial vehicle (UAV) and shipborne platforms could accurately monitor the distribution of Ulva prolifera over small areas, and therefore provide validation data for the results of remote sensing monitoring of Ulva prolifera. The results of this research can provide effective information support for the prevention and control of Ulva prolifera.
Li, Wen-Jie; Zhang, Shi-Huang; Wang, Hui-Min
2011-12-01
Ecosystem services evaluation is a hot topic in current ecosystem management and has a close link with human well-being. This paper summarized the research progress on the evaluation of ecosystem services based on geographic information system (GIS) and remote sensing (RS) technology, which can be reduced to the following three characteristics: ecological economics theory is widely applied as a key method in quantifying ecosystem services; GIS and RS technology play a key role in multi-source data acquisition, spatiotemporal analysis, and integrated platforms; and ecosystem mechanism models have become a powerful tool for understanding the relationships between natural phenomena and human activities. Aiming at the present research status and its inadequacies, this paper put forward an "Assembly Line" framework, which is a distributed framework with scalable characteristics, and discussed the future development trend of integrated research on ecosystem services evaluation based on GIS and RS technologies.
Personality disorder symptoms are differentially related to divorce frequency.
Disney, Krystle L; Weinstein, Yana; Oltmanns, Thomas F
2012-12-01
Divorce is associated with a multitude of outcomes related to health and well-being. Data from a representative community sample (N = 1,241) of St. Louis residents (ages 55-64) were used to examine associations between personality pathology and divorce in late midlife. Symptoms of the 10 DSM-IV personality disorders were assessed with the Structured Interview for DSM-IV Personality and the Multisource Assessment of Personality Pathology (both self and informant versions). Multiple regression analyses showed Paranoid and Histrionic personality disorder symptoms to be consistently and positively associated with number of divorces across all three sources of personality assessment. Conversely, Avoidant personality disorder symptoms were negatively associated with number of divorces. The present paper provides new information about the relationship between divorce and personality pathology at a developmental stage that is understudied in both domains. PsycINFO Database Record (c) 2012 APA, all rights reserved.
GEOGLAM Crop Monitor Assessment Tool: Developing Monthly Crop Condition Assessments
NASA Astrophysics Data System (ADS)
McGaughey, K.; Becker Reshef, I.; Barker, B.; Humber, M. L.; Nordling, J.; Justice, C. O.; Deshayes, M.
2014-12-01
The Group on Earth Observations (GEO) developed the Global Agricultural Monitoring initiative (GEOGLAM) to improve existing agricultural information through a network of international partnerships, data sharing, and operational research. This presentation will discuss the Crop Monitor component of GEOGLAM, which provides the Agricultural Market Information System (AMIS) with an international, multi-source, and transparent consensus assessment of crop growing conditions, status, and agro-climatic conditions likely to impact global production. This activity covers the four primary crop types (wheat, maize, rice, and soybean) within the main agricultural producing regions of the AMIS countries. These assessments have been produced operationally since September 2013 and are published in the AMIS Market Monitor Bulletin. The Crop Monitor reports provide cartographic and textual summaries of crop conditions as of the 28th of each month, according to crop type. This presentation will focus on the building of international networks, data collection, and data dissemination.
How to retrieve additional information from the multiplicity distributions
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2017-01-01
Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
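A small numerical illustration of the two descriptions contrasted above: for a pure NBD with mean m and shape k, the recurrence quantity g(N) = (N + 1) P(N + 1) / P(N) is exactly linear in N, so deviations of a measured g(N) from linearity are what motivates N-dependent NBD parameters. The parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import nbinom

m, k = 20.0, 4.0                   # assumed NBD mean and shape parameter
p = k / (k + m)                    # scipy's success-probability convention
N = np.arange(0, 60)
P = nbinom.pmf(N, k, p)            # NBD multiplicity distribution P(N)

g = (N[:-1] + 1) * P[1:] / P[:-1]  # exactly linear in N for a pure NBD
```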
NASA Astrophysics Data System (ADS)
Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.
2015-12-01
Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people do not have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with vital information to ensure food security. Google Earth Engine (GEE) is a cloud computing platform, which integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capabilities of storing and analyzing the data sets, it provides an ideal platform for the development of advanced analytic tools for extracting key variables used in regional and national food security systems. With the high performance computing and storage capabilities of GEE, a cloud-computing based system for near real-time cropland monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al. [1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. The seasonality of the MODIS time series data, for example the start date of the growing season, the length of the growing season, and the NDVI peak at the field level, is obtained for evaluating crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information on near real-time crop-growth conditions through the visualization of MODIS NDVI time series and comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at the crop-field level online. This unique utilization of GEE in combination with advanced analytic and extraction techniques provides a vital remote sensing tool for decision makers and scientists with a high degree of flexibility to adapt to different uses.
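The TIMESAT-style seasonality metrics mentioned above (start of season, length of season, NDVI peak) can be approximated for a single field-level NDVI series with a simple fraction-of-amplitude threshold, as sketched below; TIMESAT's actual function fitting is more sophisticated, and the threshold here is an assumption.

```python
import numpy as np

def seasonality_metrics(ndvi, threshold=0.2):
    """Start, length of season and peak from one NDVI time series using a
    fraction-of-amplitude threshold (indices refer to time steps)."""
    ndvi = np.asarray(ndvi, float)
    base, peak = float(np.min(ndvi)), float(np.max(ndvi))
    level = base + threshold * (peak - base)
    above = np.where(ndvi >= level)[0]
    start, end = int(above[0]), int(above[-1])
    return {"start_of_season": start,
            "length_of_season": end - start,
            "peak_ndvi": peak}
```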
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A
2016-10-26
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes' importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
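For reference, the Collective Influence score at radius ell is CI_ell(i) = (k_i - 1) multiplied by the sum of (k_j - 1) over nodes j on the frontier of the ball of radius ell around i. The sketch below computes this score with networkx; the adaptive-removal loop of the full CI algorithm is omitted.

```python
import networkx as nx

def collective_influence(G, node, ell=2):
    """CI_ell(node) = (k_node - 1) * sum of (k_j - 1) over the frontier of
    the ball of radius ell around node."""
    dist = nx.single_source_shortest_path_length(G, node, cutoff=ell)
    frontier = [j for j, d in dist.items() if d == ell]
    return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

# ranking = sorted(G.nodes, key=lambda n: collective_influence(G, n), reverse=True)
```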
Sánchez-Rodríguez, Aminael; Tejera, Eduardo; Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M. Natália D. S.; Le-Thi-Thu, Huong; Pham-The, Hai
2018-01-01
Gastric cancer is the third leading cause of cancer-related mortality worldwide and, despite advances in prevention, diagnosis and therapy, it is still regarded as a global health concern. The efficacy of the therapies for gastric cancer is limited by a poor response to currently available therapeutic regimens. One of the reasons that may explain these poor clinical outcomes is the highly heterogeneous nature of this disease. In this sense, it is essential to discover new molecular agents capable of targeting various gastric cancer subtypes simultaneously. Here, we present a multi-objective approach for the ligand-based virtual screening discovery of chemical compounds simultaneously active against the gastric cancer cell lines AGS, NCI-N87 and SNU-1. The proposed approach relies on a novel methodology based on the development of ensemble models for bioactivity prediction against each individual gastric cancer cell line. The methodology includes the aggregation of one ensemble per cell line using a desirability-based algorithm into virtual screening protocols. Our research leads to the proposal of a multi-targeted virtual screening protocol able to achieve high enrichment of known chemicals with anti-gastric cancer activity. Specifically, our results indicate that, using the proposed protocol, it is possible to retrieve almost 20 times more multi-targeted compounds in the first 1% of the ranked list than what is expected from a uniform distribution of the active compounds in the virtual screening database. More importantly, the proposed protocol attains an outstanding initial enrichment of known multi-targeted anti-gastric cancer agents. PMID:29420638
Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.
Franchi, G; Angulo, J; Moreaud, M; Sorbier, L
2018-01-01
The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging the backscattered electron images that usually have a high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images that have discriminative information but a lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion with an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by a software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
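As one concrete stand-in for the classical pansharpening techniques compared above, the sketch below applies simple high-pass-filter injection: the low-resolution EDX map is upsampled and the high-frequency detail of the high-resolution backscattered-electron (BSE) image is added to it. The gain, blur width and the assumption of an exact integer magnification are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hpf_pansharpen(edx_lowres, bse_highres, gain=1.0, sigma=2.0):
    """Upsample the EDX map to the BSE grid and inject the BSE high-pass
    detail (assumes the BSE shape is an exact multiple of the EDX shape)."""
    factors = (bse_highres.shape[0] / edx_lowres.shape[0],
               bse_highres.shape[1] / edx_lowres.shape[1])
    edx_up = zoom(edx_lowres, factors, order=1)           # bilinear upsampling
    detail = bse_highres - gaussian_filter(bse_highres, sigma)
    return edx_up + gain * detail
```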
Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring
Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.
2016-01-01
In order to enable a robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines of the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets and benchmarked against competing approaches to multi-modal fusion. PMID:27869730
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach, each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and a post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of a radio burst automatic monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent while scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA), using K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
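A minimal sketch of detrended fluctuation analysis (DFA), as used above to extract a scaling exponent from a burst time series, follows; the window sizes and first-order detrending are common defaults, not necessarily the authors' settings, and the series is assumed long enough for the largest scale.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """First-order detrended fluctuation analysis scaling exponent."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))      # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```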
Velpuri, Naga Manohar; Senay, Gabriel B.; Rowland, James; Verdin, James P.; Alemu, Henok; Melesse, Assefa M.; Abtew, Wossenu; Setegn, Shimelis G.
2014-01-01
Continental Africa has the highest volume of water stored in wetlands, large lakes, reservoirs, and rivers, yet it suffers from problems such as water availability and access. With climate change intensifying the hydrologic cycle and altering the distribution and frequency of rainfall, the problem of water availability and access will increase further. Famine Early Warning Systems Network (FEWS NET) funded by the United States Agency for International Development (USAID) has initiated a large-scale project to monitor small to medium surface water points in Africa. Under this project, multisource satellite data and hydrologic modeling techniques are integrated to monitor several hundreds of small to medium surface water points in Africa. This approach has been already tested to operationally monitor 41 water points in East Africa. The validation of modeled scaled depths with field-installed gauge data demonstrated the ability of the model to capture both the spatial patterns and seasonal variations. Modeled scaled estimates captured up to 60 % of the observed gauge variability with a mean root-mean-square error (RMSE) of 22 %. The data on relative water level, precipitation, and evapotranspiration (ETo) for water points in East and West Africa were modeled since 1998 and current information is being made available in near-real time. This chapter presents the approach, results from the East African study, and the first phase of expansion activities in the West Africa region. The water point monitoring network will be further expanded to cover much of sub-Saharan Africa. The goal of this study is to provide timely information on the water availability that would support already established FEWS NET activities in Africa. This chapter also presents the potential improvements in modeling approach to be implemented during future expansion in Africa.
Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage
Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng
2016-01-01
Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy that results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of rc/rs have appeared recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, some other studies adapt detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide some results on optimal regular deployment patterns to achieve information coverage and connectivity for a variety of rc/rs values, which are all based on data fusion by sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between the variety of rc/rs and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which can utilize collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern to achieve information coverage and connectivity, and its characteristics are deduced in closed form. Some discussions and simulations are given to show the efficiency of all deployment patterns, including previous patterns and the proposed patterns, to help developers make more impactful WSN deployment decisions. PMID:27529246
Marine natural products for multi-targeted cancer treatment: A future insight.
Kumar, Maushmi S; Adki, Kaveri M
2018-05-30
Cancer is the world's second most alarming disease; it involves abnormal cell growth and has the potential to spread to other parts of the body. Most available anticancer drugs are designed to act on specific targets by altering the activity of the transporters and genes involved. Because cancer cells exhibit complex cellular machinery, the regeneration of cancer tissues and chemoresistance towards therapy have been the main obstacles in cancer treatment. This encourages researchers to explore the multitargeted use of existing medicines to overcome the shortcomings of chemotherapy and to find alternative and safer treatment strategies. Recent developments in genomics and proteomics and an improved understanding of the molecular pharmacology of cancer have also challenged researchers to develop target-based drugs. The literature supports the evidence of natural compounds exhibiting antioxidant, antimitotic, anti-inflammatory, antibiotic, as well as anticancer activity. In this review, we selected marine sponges as a prolific source of bioactive compounds that can be explored for possible use in cancer and tried to link their role in cancer pathways. To this end, we revisited the literature to select cancer genes for the multitargeted use of existing drugs and natural products. We used Cytoscape network analysis and the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) to study possible interactions and to show the links between antioxidant, antibiotic, anti-inflammatory, and antimitotic agents and their targets for possible use in cancer. We included a total of 78 pathways, their genes, and natural compounds from the above four pharmacological classes used in cancer treatment for a multitargeted approach. Based on the Cytoscape network analysis results, we shortlisted 22 genes based on their average shortest path length, i.e., the mean distance connecting a node to all other nodes in the network. These selected genes are CDKN2A, FH, VHL, STK11, SUFU, RB1, MEN1, HRPT2, EXT1, 2, CDK4, p14, p16, TSC1, 2, AXIN2, SDHB, C, D, NF1, 2, BHD, PTCH, GPC3, CYLD and WT1. The selected genes were analysed using STRING for their protein-protein interactions. Based on the above findings, we propose that the selected genes be considered major targets and be studied for discovering marine natural products as drug leads in cancer treatment. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
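The shortlisting step described above, ranking genes by their average shortest path length to every other node, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the edge list is a hypothetical placeholder, whereas a real analysis would export the STRING/Cytoscape interaction network.

```python
import networkx as nx

# Placeholder protein-protein interactions among a few of the listed genes.
edges = [("CDKN2A", "RB1"), ("RB1", "CDK4"), ("CDK4", "CDKN2A"),
         ("VHL", "FH"), ("FH", "CDKN2A"), ("STK11", "TSC1"),
         ("TSC1", "RB1"), ("WT1", "CDKN2A")]
G = nx.Graph(edges)

def avg_shortest_path(g, node):
    """Mean shortest-path distance from `node` to every other reachable node."""
    lengths = nx.single_source_shortest_path_length(g, node)
    others = [d for n, d in lengths.items() if n != node]
    return sum(others) / len(others) if others else float("inf")

# Genes with the smallest mean distance are the best-connected candidates.
ranking = sorted(G.nodes, key=lambda n: avg_shortest_path(G, n))
print(ranking[:5])
```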
Distributed Information Fusion through Advanced Multi-Agent Control
2016-10-17
AFRL-AFOSR-JP-TR-2016-0080: Distributed Information Fusion through Advanced Multi-Agent Control. Final report by Adrian Bishop, National ICT Australia Limited, Level 5, 13 Garden St, Eveleigh 2015.
Distributed Information Fusion through Advanced Multi-Agent Control
2016-09-09
AFRL-AFOSR-JP-TR-2016-0080: Distributed Information Fusion through Advanced Multi-Agent Control. Final report by Adrian Bishop, National ICT Australia Limited, Level 5, 13 Garden St, Eveleigh 2015.
Integrated Dynamic Transit Operations (IDTO) concept of operations.
DOT National Transportation Integrated Search
2012-05-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Kim, Sunghun; Sterling, Bobbie Sue; Latimer, Lara
2010-01-01
Developing focused and relevant health promotion interventions is critical for behavioral change in a low-resource or special population. Evidence-based interventions, however, may not match the specific population or health concern of interest. This article describes the Multi-Source Method (MSM), which, in combination with a workshop format, may be used by health professionals and researchers in health promotion program development. The MSM draws on positive deviance practices and processes, focus groups, community advisors, behavioral change theory, and evidence-based strategies. Use of the MSM is illustrated through the development of ethnic-specific weight loss interventions for low-income postpartum women. The MSM may be useful in designing future health programs for other special populations for whom existing interventions are unavailable or lack relevance. PMID:20433674
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with varying illumination and resolution, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points detected by the Harris corner detector. The fine registration first finds tie point pairs between the input and reference images by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by terrain fluctuation. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
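The coarse pre-registration stage described above (SIFT matching followed by an affine fit) can be sketched with OpenCV as below. This is a minimal sketch assuming OpenCV >= 4.4 and hypothetical file names; the fine-scale Harris/TIN stage of the paper is not reproduced.

```python
import cv2
import numpy as np

# Hypothetical placeholders for the reference and input scenes.
ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
inp = cv2.imread("input.tif", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_inp, des_inp = sift.detectAndCompute(inp, None)

# Ratio-test matching keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des_inp, des_ref, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp_inp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robust affine model for the coarse alignment; RANSAC rejects outlier matches.
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
coarse = cv2.warpAffine(inp, M, (ref.shape[1], ref.shape[0]))
```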
Gradient-Type Magnetoelectric Current Sensor with Strong Multisource Noise Suppression.
Zhang, Mingji; Or, Siu Wing
2018-02-14
A novel gradient-type magnetoelectric (ME) current sensor operating in magnetic field gradient (MFG) detection and conversion mode is developed based on a pair of ME composites that have a back-to-back capacitor configuration with a baseline separation and a magnetic bias in an electrically shielded and mechanically enclosed housing. The physics behind the current sensing process is the product effect of the current-induced MFG effect associated with the vortex magnetic fields of current-carrying cables (i.e., MFG detection) and the MFG-induced ME effect in the ME composite pair (i.e., MFG conversion). The sensor output voltage is obtained directly from the gradient ME voltage of the ME composite pair and is calibrated against cable current to give the current sensitivity. The current sensing performance of the sensor is evaluated, both theoretically and experimentally, under multisource noise from electric fields, magnetic fields, vibrations, and thermal fluctuations. The sensor combines the merits of small nonlinearity in the current-induced MFG effect with those of high sensitivity and high common-mode noise rejection in the MFG-induced ME effect to achieve a high current sensitivity of 0.65-12.55 mV/A in the frequency range of 10 Hz-170 kHz, a small input-output nonlinearity of <500 ppm, a small thermal drift of <0.2%/°C in the current range of 0-20 A, and a high common-mode noise rejection rate of 17-28 dB against multisource noise.
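The sensing principle above can be illustrated with a short worked sketch: a long straight cable produces a vortex field B(r) = mu0·I/(2·pi·r), so the two ME composites separated by the baseline see a field difference (the MFG input), and the calibrated sensitivity maps the gradient ME voltage back to cable current. All numbers below are hypothetical, not the authors' calibration.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def field_difference(current_a, r_near_m, baseline_m):
    """Field difference across the composite pair for a long straight cable."""
    b_near = MU0 * current_a / (2 * math.pi * r_near_m)
    b_far = MU0 * current_a / (2 * math.pi * (r_near_m + baseline_m))
    return b_near - b_far

def current_from_voltage(v_gradient_v, sensitivity_v_per_a):
    """Invert a calibrated sensitivity (V/A) to recover the cable current."""
    return v_gradient_v / sensitivity_v_per_a

if __name__ == "__main__":
    dB = field_difference(current_a=10.0, r_near_m=0.02, baseline_m=0.01)
    print(f"Field difference for 10 A: {dB * 1e6:.1f} uT")
    # Hypothetical reading: 50 mV at an assumed 5 mV/A sensitivity -> 10 A.
    print(f"Recovered current: {current_from_voltage(0.050, 0.005):.1f} A")
```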
Gradient-Type Magnetoelectric Current Sensor with Strong Multisource Noise Suppression
2018-01-01
A novel gradient-type magnetoelectric (ME) current sensor operating in magnetic field gradient (MFG) detection and conversion mode is developed based on a pair of ME composites that have a back-to-back capacitor configuration with a baseline separation and a magnetic bias in an electrically shielded and mechanically enclosed housing. The physics behind the current sensing process is the product effect of the current-induced MFG effect associated with the vortex magnetic fields of current-carrying cables (i.e., MFG detection) and the MFG-induced ME effect in the ME composite pair (i.e., MFG conversion). The sensor output voltage is obtained directly from the gradient ME voltage of the ME composite pair and is calibrated against cable current to give the current sensitivity. The current sensing performance of the sensor is evaluated, both theoretically and experimentally, under multisource noise from electric fields, magnetic fields, vibrations, and thermal fluctuations. The sensor combines the merits of small nonlinearity in the current-induced MFG effect with those of high sensitivity and high common-mode noise rejection in the MFG-induced ME effect to achieve a high current sensitivity of 0.65–12.55 mV/A in the frequency range of 10 Hz–170 kHz, a small input-output nonlinearity of <500 ppm, a small thermal drift of <0.2%/°C in the current range of 0–20 A, and a high common-mode noise rejection rate of 17–28 dB against multisource noise. PMID:29443920