Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying the correct biomarkers may help determine pre-symptomatic AD subjects and enable early intervention. Recently, multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics problems. It aims to improve generalization performance by exploiting the features shared among different tasks. However, most existing algorithms are formulated as supervised learning schemes, which suffer from either an insufficient number of features or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
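As a rough illustration of the sparse dictionary-learning stage described above (not the authors' MMDL implementation), the following sketch uses scikit-learn to learn a dictionary from one imaging source and encode a second source against it; the array shapes, component count, and regularization setting are illustrative assumptions.

```python
# Minimal sketch of a dictionary-learning + sparse-coding stage (illustrative only;
# not the MMDL algorithm from the abstract). Assumes two "sources" are feature
# matrices of shape (n_subjects, n_features) drawn from different time slots.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
source_a = rng.normal(size=(60, 120))   # hypothetical source 1 (e.g., baseline scans)
source_b = rng.normal(size=(50, 120))   # hypothetical source 2 (e.g., follow-up scans)

# Stage-1-like step: learn a dictionary of sparse basis atoms from the first source.
dico = DictionaryLearning(n_components=16, alpha=1.0, max_iter=200, random_state=0)
codes_a = dico.fit_transform(source_a)          # sparse codes for source 1
atoms = dico.components_                        # learned dictionary (16 x 120)

# Encode the second source against the shared dictionary so both sources live in
# the same sparse feature space; these codes could feed a downstream predictor.
codes_b = sparse_encode(source_b, atoms, alpha=1.0)
print(codes_a.shape, codes_b.shape)             # (60, 16) (50, 16)
```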
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problems of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the very-high-dimensional data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
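As a generic illustration of Dempster's rule of combination mentioned in the abstract (not the paper's interval-valued formulation), the sketch below combines two mass functions over a small frame of discernment; the class labels and mass values are made-up examples.

```python
# Illustrative Dempster's rule of combination for two basic probability assignments
# over a tiny frame of discernment; masses are hypothetical, frozensets denote
# subsets of the frame (the full frame represents "unknown").
from itertools import product

FRAME = frozenset({"forest", "water", "urban"})

def combine_dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Source 1: multispectral evidence; Source 2: terrain (elevation/slope) evidence.
m_spectral = {frozenset({"forest"}): 0.6, frozenset({"forest", "water"}): 0.3, FRAME: 0.1}
m_terrain  = {frozenset({"forest"}): 0.4, frozenset({"urban"}): 0.2, FRAME: 0.4}
print(combine_dempster(m_spectral, m_terrain))
```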
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two different approaches have unique advantages and disadvantages in this classification application.
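A hedged sketch of the kind of reliability-weighted statistical multisource classification the abstract describes: per-source class log-likelihoods are combined with weights reflecting source quality. The Gaussian source models, weight values, and class names below are invented for illustration.

```python
# Reliability-weighted combination of per-source class log-likelihoods
# (illustrative; the weights and Gaussian source models are assumptions).
import numpy as np
from scipy.stats import norm

CLASSES = ["conifer", "deciduous", "barren"]
# Hypothetical per-source, per-class 1-D Gaussian models: (mean, std).
models = {
    "mss_band":  {"conifer": (30, 5), "deciduous": (45, 6), "barren": (70, 8)},
    "elevation": {"conifer": (900, 150), "deciduous": (500, 120), "barren": (1200, 200)},
}
weights = {"mss_band": 1.0, "elevation": 0.6}   # lower weight = less reliable source

def classify(obs):
    scores = {}
    for c in CLASSES:
        scores[c] = sum(
            weights[src] * norm.logpdf(obs[src], *models[src][c]) for src in models
        )
    return max(scores, key=scores.get)

print(classify({"mss_band": 42, "elevation": 560}))   # -> "deciduous"
```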
Application of Ontology Technology in Health Statistic Data Analysis.
Guo, Minjiang; Hu, Hongpu; Lei, Xingyun
2017-01-01
Research purpose: to establish a health management ontology for the analysis of health statistics data. Proposed methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistics data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and the enhancement of management efficiency.
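The abstract describes defining top-level classes and object/data properties in an ontology; the sketch below shows the same idea programmatically with rdflib rather than Protégé. The class and property names are hypothetical stand-ins for the yearbook concepts, not the published ontology.

```python
# Tiny RDF/OWL sketch of defining ontology classes and properties for health
# statistics data (hypothetical names; the real ontology was built in Protégé).
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD
from rdflib.namespace import OWL

HM = Namespace("http://example.org/health-management#")
g = Graph()
g.bind("hm", HM)

# Top-level classes and a subclass.
for cls in (HM.HealthInstitution, HM.HealthWorkforce, HM.HealthService):
    g.add((cls, RDF.type, OWL.Class))
g.add((HM.Hospital, RDF.type, OWL.Class))
g.add((HM.Hospital, RDFS.subClassOf, HM.HealthInstitution))

# An object property and a data property linking the classes to values.
g.add((HM.employs, RDF.type, OWL.ObjectProperty))
g.add((HM.employs, RDFS.domain, HM.HealthInstitution))
g.add((HM.employs, RDFS.range, HM.HealthWorkforce))
g.add((HM.bedCount, RDF.type, OWL.DatatypeProperty))
g.add((HM.bedCount, RDFS.range, XSD.integer))

# Instantiate: one hospital record integrated from a statistics table.
g.add((HM.CityHospital1, RDF.type, HM.Hospital))
g.add((HM.CityHospital1, HM.bedCount, Literal(850, datatype=XSD.integer)))
print(g.serialize(format="turtle"))
```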
Collaborative filtering on a family of biological targets.
Erhan, Dumitru; L'heureux, Pierre-Jean; Yue, Shi Yi; Bengio, Yoshua
2006-01-01
Building a QSAR model of a new biological target for which few screening data are available is a statistical challenge. However, the new target may be part of a bigger family, for which we have more screening data. Collaborative filtering or, more generally, multi-task learning, is a machine learning approach that improves the generalization performance of an algorithm by using information from related tasks as an inductive bias. We use collaborative filtering techniques for building predictive models that link multiple targets to multiple examples. The more commonalities between the targets, the better the multi-target model that can be built. We show an example of a multi-target neural network that can use family information to produce a predictive model of an undersampled target. We also evaluate JRank, a kernel-based method designed for collaborative filtering. We show the performance of both methods on compound prioritization for an HTS campaign and examine the underlying shared representation between targets. JRank outperformed the neural network in both the single- and multi-target models.
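To make the shared-representation idea behind multi-target modelling concrete (this is not the paper's specific neural network or JRank), here is a hedged sketch in which one multi-output network predicts activities against several related targets at once, so hidden units are shared across targets; the data are synthetic.

```python
# Multi-target QSAR-style sketch: a single MLP with a shared hidden layer predicts
# activity against several related targets simultaneously (synthetic data;
# illustrative of multi-task sharing, not the paper's architecture).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))              # hypothetical compound descriptors
W = rng.normal(size=(64, 3))                # three related targets share structure
Y = X @ W + 0.1 * rng.normal(size=(400, 3)) # correlated multi-target activities

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_tr, Y_tr)                       # one model, multiple targets
print("R^2 across the target family:", round(model.score(X_te, Y_te), 3))
```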
A Parallel Finite Set Statistical Simulator for Multi-Target Detection and Tracking
NASA Astrophysics Data System (ADS)
Hussein, I.; MacMillan, R.
2014-09-01
Finite Set Statistics (FISST) is a powerful Bayesian inference tool for the joint detection, classification and tracking of multi-target environments. FISST is capable of handling phenomena such as clutter, misdetections, and target birth and decay. Implicit within the approach are solutions to the data association and target label-tracking problems. Finally, FISST provides generalized information measures that can be used for sensor allocation across different types of tasks such as: searching for new targets, and classification and tracking of known targets. These FISST capabilities have been demonstrated on several small-scale illustrative examples. However, for implementation in a large-scale system as in the Space Situational Awareness problem, these capabilities require a lot of computational power. In this paper, we implement FISST in a parallel environment for the joint detection and tracking of multi-target systems. In this implementation, false alarms and misdetections will be modeled. Target birth and decay will not be modeled in the present paper. We will demonstrate the success of the method for as many targets as we possibly can in a desktop parallel environment. Performance measures will include: number of targets in the simulation, certainty of detected target tracks, computational time as a function of clutter returns and number of targets, among other factors.
A Joint Multitarget Estimator for the Joint Target Detection and Tracking Filter
2015-06-27
The problem involves two objective functions in conflict: the first objective function is the information-theoretic part of the problem and aims for entropy maximization, while the second arises from a constraint in the problem. For the sake of completeness and clarity, the report also summarizes how each concept, such as entropy, is utilized.
2013-12-14
Central limit theorems (CLTs) for information-theoretic statistics of large sample covariance matrices, with applications to array signal processing and multi-source power estimation (Hachem, Kharouf, Najim, Silverstein).
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the very-high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
NASA Technical Reports Server (NTRS)
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
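As a hedged illustration of conjugate-gradient training (not the authors' network), the sketch below fits a small logistic classifier by minimizing its loss with SciPy's nonlinear conjugate-gradient optimizer; the data and model size are synthetic.

```python
# Training a tiny classifier with nonlinear conjugate-gradient optimization
# (scipy method="CG"), as an illustrative stand-in for the conjugate-gradient
# network training discussed in the abstract. Synthetic two-class data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(+1, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

def loss_and_grad(w):
    z = X @ w[:-1] + w[-1]
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    return loss, np.append(grad_w, grad_b)

res = minimize(loss_and_grad, x0=np.zeros(5), jac=True, method="CG")
pred = (1.0 / (1.0 + np.exp(-(X @ res.x[:-1] + res.x[-1]))) > 0.5).astype(int)
print("converged:", res.success, "training accuracy:", (pred == y).mean())
```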
Multisource feedback to graduate nurses: a multimethod study.
McPhee, Samantha; Phillips, Nicole M; Ockerby, Cherene; Hutchinson, Alison M
2017-11-01
(1) To explore graduate nurses' perceptions of the influence of multisource feedback on their performance and (2) to explore the perceptions of Clinical Nurse Educators involved in providing feedback regarding the feasibility and benefit of the approach. Graduate registered nurses are expected to provide high-quality care for patients in demanding and unpredictable clinical environments. Receiving feedback is essential to their development. Performance appraisals are a common method used to provide feedback and typically involve a single source of feedback. Alternatively, multisource feedback allows the learner to gain insight into performance from a variety of perspectives. This study explores multisource feedback in an Australian setting within the graduate nurse context. Multimethod study. Eleven graduates were given structured performance feedback from four raters: Nurse Unit Manager, Clinical Nurse Educator, preceptor and a self-appraisal. Thirteen graduates received standard single-rater appraisals. Data regarding perceptions of feedback for both groups were obtained using a questionnaire. Semistructured interviews were conducted with nurses who received multisource feedback and with the educators. In total, 94% (n = 15) of survey respondents perceived that feedback was important during the graduate year. Four themes emerged from interviews: informal feedback, appropriateness of raters, elements of delivery and creating an appraisal process that is 'more real'. Multisource feedback was perceived as more beneficial compared to single-rater feedback. Educators saw value in multisource feedback; however, perceived barriers were engaging raters and collating feedback. Some evidence exists to indicate that feedback from multiple sources is valued by graduates. Further research in a larger sample and with more experienced nurses is required. Evidence resulting from this study indicates that multisource feedback is valued by both graduates and educators and informs graduates' development and transition into the role. Thus, a multisource approach to feedback for graduate nurses should be considered.
Hill, Jacqueline J; Asprey, Anthea; Richards, Suzanne H; Campbell, John L
2012-05-01
UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. A qualitative study in UK general practice. In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured, telephone interview. A thematic analysis was performed. Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, 'self' selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised.
2010-07-01
The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF), along with a team including the Pennsylvania State University (PSU), Iona College (Iona), and Tennessee State University, reports on CMIF's current research on methods for Test and Evaluation, involving for example large-factor-space experimental design techniques.
Multisource geological data mining and its utilization of uranium resources exploration
NASA Astrophysics Data System (ADS)
Zhang, Jie-lin
2009-10-01
Nuclear energy, as a clean energy source, plays an important role in China's economic development, and according to the national long-term development strategy, many more nuclear power plants will be built in the next few years, which poses a great challenge for uranium resource exploration. Research and practice in mineral exploration demonstrate that utilizing modern Earth Observation System (EOS) technology and developing new multi-source geological data mining methods are effective approaches to uranium deposit prospecting. Based on data mining and knowledge discovery technology, this paper uses multi-source geological data to characterize the spectral, geophysical and spatial information of uranium mineralization factors, and provides technical support for uranium prospecting integrated with field remote sensing geological surveys. The multi-source geological data used in this paper include satellite hyperspectral imagery (Hyperion), high spatial resolution remote sensing data, uranium geological information, airborne radiometric data, and aeromagnetic and gravity data, and related data mining methods have been developed, such as fusion of optical data and Radarsat imagery and integration of remote sensing and geophysical information. Based on the above approaches, the multi-geoscience information of uranium mineralization factors, including complex polystage rock masses, mineralization-controlling faults and hydrothermal alterations, has been identified, the metallogenic potential of uranium has been evaluated, and several prospective areas have been delineated.
Cervo, Silvia; Rovina, Jane; Talamini, Renato; Perin, Tiziana; Canzonieri, Vincenzo; De Paoli, Paolo; Steffan, Agostino
2013-07-30
Efforts to improve patients' understanding of their own medical treatments or of research in which they are involved are progressing, especially with regard to informed consent procedures. We aimed to design a multisource informed consent procedure that is easily adaptable to both clinical and research applications, and to evaluate its effectiveness in terms of understanding and awareness, even in less educated patients. We designed a multisource informed consent procedure for patients' enrolment in a Cancer Institute Biobank (CRO-Biobank). From October 2009 to July 2011, a total of 550 cancer patients admitted to the Centro di Riferimento Oncologico IRCCS Aviano, who agreed to contribute to its biobank, were consecutively enrolled. Participants were asked to answer a self-administered questionnaire aimed at exploring their understanding of biobanks and their needs for information on this topic, before and after study participation. Chi-square tests were performed on the questionnaire answers, according to gender or education. Of the 430 patients who returned the questionnaire, only 36.5% knew what a biobank was before participating in the study. Patients with less formal education were less informed by some sources (the Internet, newspapers, magazines, and our Institute). The final assessment test, taken after the multisource informed consent procedure, showed more than 95% correct answers. The information received was judged to be very or fairly understandable in almost all cases. More than 95% of patients were aware of participating in a biobank project, and gave helping cancer research (67.5%), moral obligation, and supporting cancer care as their main reasons for involvement. Our multisource informed consent information system allowed a high rate of understanding and awareness of study participation, even among less-educated participants, and could be an effective and easy-to-apply model for supporting a well-informed decision-making process in several fields, from clinical practice to research. Further studies are needed to explore the effects of each source of information, and of other sources suggested by participants in the questionnaire, on study comprehension.
Multimethod-Multisource Approach for Assessing High-Technology Training Systems.
ERIC Educational Resources Information Center
Shlechter, Theodore M.; And Others
This investigation examined the value of using a multimethod-multisource approach to assess high-technology training systems. The research strategy was utilized to provide empirical information on the instructional effectiveness of the Reserve Component Virtual Training Program (RCVTP), which was developed to improve the training of Army National…
Multi-Source Evaluation of Interpersonal and Communication Skills of Family Medicine Residents
ERIC Educational Resources Information Center
Leung, Kai-Kuen; Wang, Wei-Dan; Chen, Yen-Yuan
2012-01-01
There is a lack of information on the use of multi-source evaluation to assess trainees' interpersonal and communication skills in Oriental settings. This study is conducted to assess the reliability and applicability of assessing the interpersonal and communication skills of family medicine residents by patients, peer residents, nurses, and…
Sun, Huifang; Dang, Yaoguo; Mao, Wenxin
2018-03-03
For the multi-attribute decision-making problem in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute multi-level decision structure and the causalities between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method.
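As a hedged illustration of the kernel and greyness-degree notions used above, the sketch below computes, for interval grey numbers, the common textbook definitions (kernel as the interval midpoint, greyness degree as interval width relative to the background domain) and stacks them into kernel and greyness vectors; the paper's definitions for extended grey numbers may differ in detail, and the numbers are invented.

```python
# Kernel and greyness-degree vectors for a sequence of interval grey numbers
# (textbook-style definitions used for illustration; the paper's extended grey
# number definitions may differ). Intervals and the background domain are made up.
import numpy as np

def kernel(lo, hi):
    """Kernel of an interval grey number [lo, hi]: its midpoint."""
    return 0.5 * (lo + hi)

def greyness_degree(lo, hi, domain_lo, domain_hi):
    """Greyness degree: interval width relative to the background domain width."""
    return (hi - lo) / (domain_hi - domain_lo)

# Hypothetical attribute values of one alternative, as interval grey numbers.
intervals = np.array([[3.0, 5.0], [6.5, 7.0], [2.0, 4.5]])
domain = (0.0, 10.0)                          # assumed measurement domain

kernel_vec = np.array([kernel(lo, hi) for lo, hi in intervals])
greyness_vec = np.array([greyness_degree(lo, hi, *domain) for lo, hi in intervals])
print("kernel vector:", kernel_vec)           # whitened (crisp) representation
print("greyness vector:", greyness_vec)       # uncertainty carried by each value
```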
Variable cycle control model for intersection based on multi-source information
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan
2018-05-01
In order to improve the efficiency of the traffic control system in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable cell length, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a Lane group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is developed to solve the proposed model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
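A heavily simplified, hedged sketch of the bi-level structure described above: an upper level searches over cycle lengths while a lower level splits green time across two lane groups. The Webster-style delay expression and all demand/saturation numbers are illustrative assumptions, not the paper's Lane group-based Cell Transmission Model or its solution algorithm.

```python
# Bi-level signal-timing sketch (illustrative only): the lower level allocates
# green splits for a given cycle, the upper level picks the cycle length that
# minimizes a Webster-style average delay. Flows/saturation rates are made up.
import numpy as np

flows = np.array([450.0, 300.0])        # veh/h per lane group (assumed demand)
sat_flows = np.array([1800.0, 1600.0])  # veh/h saturation flows (assumed)
lost_time = 10.0                        # s of lost time per cycle (assumed)

def lower_level_splits(cycle):
    """Split effective green proportionally to flow ratios (a simple fairness rule)."""
    ratios = flows / sat_flows
    green_total = cycle - lost_time
    return green_total * ratios / ratios.sum()

def webster_delay(cycle, greens):
    """Uniform-delay term of Webster's formula, averaged over lane groups."""
    lam = greens / cycle                                 # green ratios
    x = np.minimum(flows / (lam * sat_flows), 0.95)      # degrees of saturation (capped)
    d = 0.5 * cycle * (1 - lam) ** 2 / (1 - lam * x)
    return float(np.average(d, weights=flows))

candidates = np.arange(40, 121, 5)                       # candidate cycle lengths (s)
best = min(candidates, key=lambda c: webster_delay(c, lower_level_splits(c)))
print("selected cycle length:", best, "s, splits:", lower_level_splits(best).round(1))
```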
A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking
Wang, Xuedong; Sun, Shudong; Corchado, Juan M.
2017-01-01
We review some advances of the particle filtering (PF) algorithm that have been achieved in the last decade in the context of target tracking, with regard to either a single target or multiple targets in the presence of false or missing data. The first part of our review covers remarkable achievements made for the single-target PF from several aspects including importance proposal, computing efficiency, particle degeneracy/impoverishment and constrained/multi-modal systems. The second part of our review analyzes the intractable challenges raised by general multitarget (multi-sensor) tracking due to random target birth and termination, false alarm, misdetection, measurement-to-track (M2T) uncertainty and track uncertainty. The mainstream multitarget PF approaches fall into two main classes: one based on M2T association approaches, and the other, such as the finite set statistics-based PF, avoiding explicit association. In either case, significant challenges remain due to unknown tracking scenarios and integrated tracking management. PMID:29168772
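For readers unfamiliar with the basic PF being surveyed, here is a hedged, single-target bootstrap particle filter on a toy 1-D random-walk model with Gaussian measurements; the model, noise levels, and particle count are arbitrary, and none of the surveyed multitarget machinery (association, FISST) is included.

```python
# Minimal bootstrap particle filter for a 1-D random-walk target observed in
# Gaussian noise (toy example; none of the multitarget/association machinery
# surveyed in the paper is included).
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 500                       # time steps, number of particles
q, r = 0.5, 1.0                      # process / measurement noise std (assumed)

# Simulate a ground-truth track and noisy measurements.
truth = np.cumsum(q * rng.normal(size=T))
meas = truth + r * rng.normal(size=T)

particles = rng.normal(0.0, 2.0, size=N)
estimates = []
for z in meas:
    particles = particles + q * rng.normal(size=N)          # propagate (prediction)
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)            # likelihood weights
    w /= w.sum()
    estimates.append(np.sum(w * particles))                  # weighted mean estimate
    idx = rng.choice(N, size=N, p=w)                         # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print("filter RMSE:", round(float(rmse), 3))
```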
The Finnish multisource national forest inventory: small-area estimation and map production
Erkki Tomppo
2009-01-01
A driving force motivating development of the multisource national forest inventory (MS-NFI) in connection with the Finnish national forest inventory (NFI) was the desire to obtain forest resource information for smaller areas than is possible using field data only without significantly increasing the cost of the inventory. A basic requirement for the method was that...
Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier
2012-01-01
Landcover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery, aerial photographs, etc., have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for the change/no-change decision are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource change detection fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The obtained results are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proven efficient at identifying the change detection index with the highest contribution.
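A hedged sketch of the index-threshold-fuse pipeline outlined above: three simple change indices are thresholded independently and fused by majority vote. The synthetic images, the mean-plus-k-sigma threshold, and the voting rule are illustrative stand-ins for the paper's indices, thresholding algorithms, and fusion process.

```python
# Change-detection sketch: compute change indices from two co-registered images,
# threshold each into change/no-change, then fuse the binary maps by majority
# vote (synthetic data; thresholds and indices are illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
img_t1 = rng.normal(100, 10, size=(128, 128))
img_t2 = img_t1 + rng.normal(0, 3, size=(128, 128))
img_t2[40:60, 40:60] += 40.0                       # simulated land-cover change

def threshold(index, k=2.0):
    """Label pixels whose index exceeds mean + k*std as 'change'."""
    return index > index.mean() + k * index.std()

diff_idx = np.abs(img_t2 - img_t1)                               # absolute difference
ratio_idx = np.abs(np.log((img_t2 + 1e-6) / (img_t1 + 1e-6)))    # log-ratio index
norm_idx = diff_idx / (img_t1 + img_t2)                          # normalized difference

maps = np.stack([threshold(i) for i in (diff_idx, ratio_idx, norm_idx)])
fused = maps.sum(axis=0) >= 2                      # majority-vote fusion
print("changed pixels detected:", int(fused.sum()), "of", 20 * 20)
```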
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agricultural monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e., multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets based on the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, providing a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation.
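The correction step described above relies on Gaussian distribution properties of the reflectance datasets; as a hedged, simplified illustration, the sketch below matches the mean and standard deviation of a coarse-scale reflectance band to a fine-scale baseline band. The arrays are synthetic, and the paper's actual correction also accounts for spatial variation of these statistics.

```python
# Moment-matching sketch: adjust a coarse-scale reflectance band so its Gaussian
# statistics (mean, std) match those of the fine-scale baseline band. Synthetic
# data; the paper's full method also models spatial variation of the statistics.
import numpy as np

rng = np.random.default_rng(0)
fine = np.clip(rng.normal(0.25, 0.05, size=(300, 300)), 0, 1)    # baseline reflectance
coarse = np.clip(rng.normal(0.30, 0.08, size=(100, 100)), 0, 1)  # biased other source

def match_moments(src, ref):
    """Linearly rescale src so that its mean/std equal those of ref."""
    return (src - src.mean()) / src.std() * ref.std() + ref.mean()

corrected = match_moments(coarse, fine)
print("before:", round(coarse.mean(), 3), round(coarse.std(), 3))
print("after: ", round(corrected.mean(), 3), round(corrected.std(), 3))
print("target:", round(fine.mean(), 3), round(fine.std(), 3))
```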
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-12-13
The conventional one-drug-one-gene approach has had limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.
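To make the one-class collaborative-filtering idea concrete (this is a generic weighted matrix-factorization sketch, not the authors' dual-regularized algorithm), the code below factorizes a sparse chemical-protein interaction matrix in which observed interactions get full weight and unobserved entries a small weight, then ranks candidate protein targets for one chemical; all sizes and hyperparameters are assumptions.

```python
# One-class collaborative-filtering sketch via weighted matrix factorization with
# L2 regularization (gradient descent). Observed chemical-protein interactions are
# weighted heavily, unobserved pairs lightly, so missing entries act as weak
# negatives. Generic illustration, not the paper's dual-regularized method.
import numpy as np

rng = np.random.default_rng(0)
n_chem, n_prot, k = 40, 25, 8
R = (rng.random((n_chem, n_prot)) < 0.05).astype(float)   # sparse 0/1 interactions
W = np.where(R > 0, 1.0, 0.05)                            # confidence weights
lam, lr = 0.1, 0.05

C = 0.1 * rng.normal(size=(n_chem, k))   # chemical latent factors
P = 0.1 * rng.normal(size=(n_prot, k))   # protein latent factors
for _ in range(500):
    E = W * (R - C @ P.T)                # weighted residual
    C += lr * (E @ P - lam * C)          # gradient steps on the weighted loss
    P += lr * (E.T @ C - lam * P)

scores = C[0] @ P.T                      # predicted affinity of chemical 0 to proteins
print("top-3 candidate protein targets for chemical 0:", np.argsort(-scores)[:3])
```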
Ren, Yin; Deng, Lu-Ying; Zuo, Shu-Di; Song, Xiao-Dong; Liao, Yi-Lan; Xu, Cheng-Dong; Chen, Qi; Hua, Li-Zhong; Li, Zheng-Wei
2016-09-01
Identifying factors that influence the land surface temperature (LST) of urban forests can help improve simulations and predictions of spatial patterns of urban cool islands. This requires a quantitative analytical method that combines spatial statistical analysis with multi-source observational data. The purpose of this study was to reveal how human activities and ecological factors jointly influence LST in clustering regions (hot or cool spots) of urban forests. Using Xiamen City, China from 1996 to 2006 as a case study, we explored the interactions between human activities and ecological factors, as well as their influences on urban forest LST. Population density was selected as a proxy for human activity. We integrated multi-source data (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) to develop a database on a unified urban scale. The driving mechanism of urban forest LST was revealed through a combination of multi-source spatial data and spatial statistical analysis of clustering regions. The results showed that the main factors contributing to urban forest LST were dominant tree species and elevation. The interactions between human activity and specific ecological factors linearly or nonlinearly increased LST in urban forests. Strong interactions between elevation and dominant species were generally observed and were prevalent in both hot-spot and cold-spot areas in different years. In conclusion, quantitative studies based on spatial statistics and GeogDetector models should be conducted in urban areas to reveal interactions between human activities, ecological factors, and LST.
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we have presented two approaches addressing visual target tracking and localization in complex urban environments. The two techniques presented in this paper are: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image pixel to world coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and shown to produce reliable results for tracking and localization of targets of interest in complex urban environments.
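A hedged sketch of the motion-segmentation and appearance-association steps described above, using OpenCV: a Gaussian-mixture background subtractor segments moving blobs, connected-components analysis extracts them, and RGB histograms are compared to associate a detection with known tracks. The thresholds, histogram bins, and video path are placeholder assumptions, and the calibration/localization stage is not included.

```python
# Motion segmentation (MOG2 Gaussian mixture), connected-components blob
# extraction, and RGB-histogram association for multi-target tracking
# (illustrative sketch; thresholds, bin counts, and video path are placeholders).
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def rgb_hist(patch):
    h = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(h, h).flatten()

def detections(frame, min_area=200):
    mask = backsub.apply(frame)
    mask = (mask == 255).astype(np.uint8)            # drop shadow pixels
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):                            # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            yield centroids[i], rgb_hist(frame[y:y + h, x:x + w])

def associate(hist, track_hists, min_corr=0.5):
    """Return index of the best-matching existing track, or None."""
    scores = [cv2.compareHist(hist.astype(np.float32), t.astype(np.float32),
                              cv2.HISTCMP_CORREL) for t in track_hists]
    best = int(np.argmax(scores)) if scores else None
    return best if best is not None and scores[best] >= min_corr else None

cap = cv2.VideoCapture("surveillance.avi")           # placeholder video source
tracks = []                                          # list of reference histograms
ok, frame = cap.read()
while ok:
    for centroid, hist in detections(frame):
        if associate(hist, tracks) is None:
            tracks.append(hist)                      # start a new track
    ok, frame = cap.read()
print("tracks initialized:", len(tracks))
```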
Husain, Syed S; Kalinin, Alexandr; Truong, Anh; Dinov, Ivo D
Intuitive formulation of informative and computationally-efficient queries on big and complex datasets presents a number of challenges. As data collection becomes increasingly streamlined and ubiquitous, data exploration, discovery and analytics get considerably harder. Exploratory querying of heterogeneous and multi-source information is both difficult and necessary to advance our knowledge about the world around us. We developed a mechanism to integrate dispersed multi-source data and serve the mashed information via human and machine interfaces in a secure, scalable manner. This process facilitates the exploration of subtle associations between variables, population strata, or clusters of data elements, which may be opaque to standard independent inspection of the individual sources. This new platform includes a device-agnostic tool (Dashboard webapp, http://socr.umich.edu/HTML5/Dashboard/) for graphical querying, navigating and exploring the multivariate associations in complex heterogeneous datasets. The paper illustrates this core functionality and service-oriented infrastructure using healthcare data (e.g., US data from the 2010 Census, Demographic and Economic surveys, Bureau of Labor Statistics, and Center for Medicare Services) as well as Parkinson's Disease neuroimaging data. Both the back-end data archive and the front-end dashboard interfaces are continuously expanded to include additional data elements and new ways to customize the human and machine interactions. A client-side data import utility allows for easy and intuitive integration of user-supplied datasets. This completely open-science framework may be used for exploratory analytics, confirmatory analyses, meta-analyses, and education and training purposes in a wide variety of fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; White, Amanda M.; Whitney, Paul D.
2013-06-04
The Multi-Source Signatures for Nuclear Programs project, part of Pacific Northwest National Laboratory's (PNNL) Signature Discovery Initiative, seeks to computationally capture expert assessment of multi-type information such as text, sensor output, imagery, or audio/video files, to assess nuclear activities through a series of Bayesian network (BN) models. These models incorporate knowledge from a diverse range of information sources in order to help assess a country's nuclear activities. The models span engineering topic areas, state-level indicators, and facility-specific characteristics. To illustrate the development, calibration, and use of BN models for multi-source assessment, we present a model that predicts a country's likelihood to participate in the international nuclear nonproliferation regime. We validate this model by examining the extent to which the model assists non-experts in arriving at conclusions similar to those provided by nuclear proliferation experts. We also describe the PNNL-developed software used throughout the lifecycle of the Bayesian network model development.
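As a hedged, self-contained illustration of how a small Bayesian network can fuse multi-source indicators into a likelihood assessment, the sketch below computes a posterior by direct enumeration. The structure, variable names, and probability tables are invented and far simpler than the PNNL models.

```python
# Toy two-indicator Bayesian network evaluated by direct enumeration: the
# probability tables and variables are invented stand-ins, not the PNNL models.
# Structure: Participation -> TreatyStatements, Participation -> SafeguardsRecord.
p_participation = {True: 0.7, False: 0.3}                      # prior
p_statements = {True: {True: 0.8, False: 0.2},                 # P(stmt | particip.)
                False: {True: 0.3, False: 0.7}}
p_safeguards = {True: {True: 0.9, False: 0.1},                 # P(record | particip.)
                False: {True: 0.4, False: 0.6}}

def posterior(stmt_obs, record_obs):
    """P(Participation | observed text-derived and inspection-derived indicators)."""
    joint = {}
    for particip in (True, False):
        joint[particip] = (p_participation[particip]
                           * p_statements[particip][stmt_obs]
                           * p_safeguards[particip][record_obs])
    return joint[True] / sum(joint.values())

print("P(participation | both indicators positive):", round(posterior(True, True), 3))
print("P(participation | both indicators negative):", round(posterior(False, False), 3))
```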
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-04-12
This paper mainly studies and verifies the target number and category-resolution method in multi-target cases and the target depth-resolution method for aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and number resolution in multi-target cases is realized in combination with the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
Towards Device-Independent Information Processing on General Quantum Networks
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Hoban, Matty J.
2018-01-01
The violation of certain Bell inequalities allows for device-independent information processing secure against nonsignaling eavesdroppers. However, this only holds for the Bell network, in which two or more agents perform local measurements on a single shared source of entanglement. To overcome the practical constraints that entangled systems can only be transmitted over relatively short distances, large-scale multisource networks have been employed. Do there exist analogs of Bell inequalities for such networks, whose violation is a resource for device independence? In this Letter, the violation of recently derived polynomial Bell inequalities will be shown to allow for device independence on multisource networks, secure against nonsignaling eavesdroppers.
A statistical approach to combining multisource information in one-class classifiers
Simonson, Katherine M.; Derek West, R.; Hansen, Ross L.; ...
2017-06-08
A new method is introduced in this paper for combining information from multiple sources to support one-class classification. The contributing sources may represent measurements taken by different sensors of the same physical entity, repeated measurements by a single sensor, or numerous features computed from a single measured image or signal. The approach utilizes the theory of statistical hypothesis testing, and applies Fisher's technique for combining p-values, modified to handle nonindependent sources. Classifier outputs take the form of fused p-values, which may be used to gauge the consistency of unknown entities with one or more class hypotheses. The approach enables rigorous assessment of classification uncertainties, and allows for traceability of classifier decisions back to the constituent sources, both of which are important for high-consequence decision support. Application of the technique is illustrated in two challenge problems, one for skin segmentation and the other for terrain labeling. Finally, the method is seen to be particularly effective for relatively small training samples.
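Since the classifier outputs are fused p-values, a hedged sketch of Fisher's combination step is given below using SciPy; the simple scaling used to account for nonindependence is a generic Brown-style adjustment with an assumed correction factor, not the modification developed in the paper.

```python
# Fisher's method for combining p-values from multiple sources, plus a generic
# scaled-chi-square adjustment for correlated sources (the correction factor is
# an assumed value; the paper derives its own modification for dependence).
import numpy as np
from scipy.stats import chi2, combine_pvalues

p_values = np.array([0.04, 0.20, 0.07, 0.55])        # per-source consistency p-values

# Independent-source case: SciPy's built-in Fisher combination.
stat, fused_p = combine_pvalues(p_values, method="fisher")
print("independent Fisher:", round(stat, 3), round(fused_p, 4))

# Dependence-adjusted variant: inflate the variance of the Fisher statistic by a
# correction factor c > 1 (Brown-style), i.e. use a scaled chi-square reference.
c = 1.4                                              # assumed correlation correction
k = len(p_values)
fisher_stat = -2.0 * np.log(p_values).sum()
fused_p_dep = chi2.sf(fisher_stat / c, df=2 * k / c)
print("dependence-adjusted:", round(fisher_stat, 3), round(fused_p_dep, 4))
```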
Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.
Wang, Xuan; Liu, Jinghong; Zhou, Qianfei
2016-12-25
In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on an UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
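For illustration, a stripped-down recursive least squares (RLS) update that filters one geodetic coordinate of a target across successive images is sketched below in Python; the coupling with UAV dead reckoning described in the abstract is not modelled, and the forgetting factor and sample measurements are assumed values.

class ScalarRLS:
    """Recursive least squares estimate of a (near-)constant target coordinate."""
    def __init__(self, forgetting=0.98):
        self.lam = forgetting    # forgetting factor in (0, 1]
        self.theta = None        # current coordinate estimate
        self.P = 1e6             # estimate variance; a large value acts as an uninformative prior

    def update(self, measurement):
        if self.theta is None:
            self.theta = float(measurement)
            return self.theta
        gain = self.P / (self.lam + self.P)
        self.theta += gain * (measurement - self.theta)
        self.P = (1.0 - gain) * self.P / self.lam
        return self.theta

# Per-image localizations of one target (metres of easting, say) are refined recursively.
rls = ScalarRLS()
for z in [1012.0, 1005.5, 1009.8, 1007.1]:
    print(round(rls.update(z), 2))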
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
Multisource Data Classification Using A Hybrid Semi-supervised Learning Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L; Shekhar, Shashi
2009-01-01
In many practical situations thematic classes can not be discriminated by spectral measurements alone. Often one needs additional features such as population density, road density, wetlands, elevation, soil types, etc. which are discrete attributes. On the other hand remote sensing image features are continuous attributes. Finding a suitable statistical model and estimation of parameters is a challenging task in multisource (e.g., discrete and continuous attributes) data classification. In this paper we present a semi-supervised learning method by assuming that the samples were generated by a mixture model, where each component could be either a continuous or discrete distribution. Overall classification accuracy of the proposed method is improved by 12% in our initial experiments.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainties integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model and represents the synthetical impact of all the uncertain factors on its spatial structure. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition
NASA Astrophysics Data System (ADS)
LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.
2013-12-01
Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage in that multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, implementing the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, the multi-source approach adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity. Between December of 2012 and July of 2013, we completed multi-source surveys at five sites with depths of exploration ranging from 150 to 450 m. The sites included shallow geothermal sites near Reno, Nevada, Pomarance, Italy, and Volterra, Italy; a mineral exploration site near Timmins, Quebec; and a landslide investigation near Vajont Dam in northern Italy. These sites provided a series of challenges in survey design and deployment, including some extremely difficult terrain and a broad range of background resistivity and induced polarization values. Despite these challenges, comparison of the multi-source results to resistivity and induced polarization data collected with more traditional methods supports the thesis that the multi-source approach is capable of providing substantial improvements in both depth of penetration and resolution over conventional approaches.
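The 400 W versus 6400 W comparison in the abstract follows from the scaling it states: received voltage grows linearly with injected current while transmitter power grows with its square. A worked version of the arithmetic, assuming a fixed load resistance R, is:

V \propto I, \qquad P = I^{2} R \;\Rightarrow\; \text{doubling } V \text{ costs } 4P,
\qquad 4 \text{ parallel dipoles} \Rightarrow 4V \;\Rightarrow\;
P_{\text{single}} \approx 4^{2} \times 400\ \mathrm{W} = 6400\ \mathrm{W}.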
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
NASA Astrophysics Data System (ADS)
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1998-03-01
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that the potential target may take, and we use fuzzy logic strategies which enable the fusion of subjective attribute information from the sensors and the PDB, making the derivation of target identity quicker and more precise, with statistically quantifiable measures of confidence. The conventional Dempster-Shafer method lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
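A minimal Python sketch of Dempster's rule of combination, the fusion step this work modifies; the expected-utility-interval decision rule and the Platform Data Base lookups are not reproduced, and the radar/ESM mass values below are invented for illustration.

def dempster_combine(m1, m2):
    # Mass functions are dicts mapping frozensets of hypotheses to masses summing to 1.
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the two bodies of evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Radar and ESM evidence over the frame {F (friend), H (hostile)}.
radar = {frozenset("F"): 0.6, frozenset("FH"): 0.4}
esm = {frozenset("H"): 0.3, frozenset("FH"): 0.7}
print(dempster_combine(radar, esm))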
The role of multi-target policy instruments in agri-environmental policy mixes.
Schader, Christian; Lampkin, Nicholas; Muller, Adrian; Stolze, Matthias
2014-12-01
The Tinbergen Rule has been used to criticise multi-target policy instruments for being inefficient. The aim of this paper is to clarify the role of multi-target policy instruments using the case of agri-environmental policy. Employing an analytical linear optimisation model, this paper demonstrates that there is no general contradiction between multi-target policy instruments and the Tinbergen Rule, if multi-target policy instruments are embedded in a policy-mix with a sufficient number of targeted instruments. We show that the relation between cost-effectiveness of the instruments, related to all policy targets, is the key determinant for an economically sound choice of policy instruments. If economies of scope with respect to achieving policy targets are realised, a higher cost-effectiveness of multi-target policy instruments can be achieved. Using the example of organic farming support policy, we discuss several reasons why economies of scope could be realised by multi-target agri-environmental policy instruments. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multi-Target State Extraction for the SMC-PHD Filter
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
35-GHz radar sensor for automotive collision avoidance
NASA Astrophysics Data System (ADS)
Zhang, Jun
1999-07-01
This paper describes the development of a radar sensor system used for automotive collision avoidance. Because a heavy truck may have a much larger radar cross section than a motorcyclist, the radar receiver may face a large dynamic range, and multiple targets at different speeds may confuse the echo spectrum, causing ambiguity between the range and speed of a target. To get more information about the target and background, and to adapt to the large dynamic range and multiple targets, a frequency modulated and pseudo-random binary sequence phase modulated continuous wave radar system is described. The analysis of this double-modulation system is given. High-speed signal processing and data processing components are used to process and combine the data and information from echoes at different directions and at every moment.
Li, Ying Hong; Wang, Pan Pan; Li, Xiao Xu; Yu, Chun Yan; Yang, Hong; Zhou, Jin; Xue, Wei Wei; Tan, Jun; Zhu, Feng
2016-01-01
The human kinome is one of the most productive classes of drug target, and there is an emerging necessity for treating complex diseases by means of polypharmacology (multi-target drugs and combination products). However, the advantages of multi-target drugs and combination products are still under debate. A comparative analysis between FDA-approved multi-target drugs and combination products targeting the human kinome was conducted by mapping targets onto the phylogenetic tree of the human kinome. The network medicine approach illustrating drug-target interactions was applied to identify popular targets of multi-target drugs and combination products. As identified, the multi-target drugs tended to inhibit target pairs in the human kinome, especially the receptor tyrosine kinase family, while the combination products were able to act against targets of distant homology relationship. This finding suggests choosing combination products as a better solution for designing drugs aimed at targets of distant homology relationship. Moreover, sub-networks of drug-target interactions in specific diseases were generated, and mechanisms shared by multi-target drugs and combination products were identified. In conclusion, this study performed an analysis between approved multi-target drugs and combination products against the human kinome, which could assist the discovery of next-generation polypharmacology.
Multisource Data Integration in Remote Sensing
NASA Technical Reports Server (NTRS)
Tilton, James C. (Editor)
1991-01-01
Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled. The full text of these papers is included. New instruments and new sensors are discussed that can provide us with a large variety of new views of the real world. This huge amount of data has to be combined and integrated in a (computer-) model of this world. Multiple sources may give complimentary views of the world - consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.
Abdolmaleki, Azizeh; Ghasemi, Jahan B
2017-01-01
Finding high quality starting compounds is a critical job at the beginning of the lead generation stage for multi-target drug discovery (MTDD). Designing hybrid compounds as selective multi-target chemical entities is a challenge, an opportunity, and a new idea to act better against specific multiple targets. A hybrid molecule is formed from the participation of two (or more) pharmacophore groups, so these new compounds often exhibit two or more activities, acting as multi-target drugs (mt-drugs), and may have superior safety or efficacy. Integrating a range of information with sophisticated new in silico, bioinformatics, structural biology, and pharmacogenomics methods may be useful for the discovery, design, and synthesis of new hybrid molecules. In this regard, many rational and screening approaches have been followed by medicinal chemists for lead generation in MTDD. Here, we review some popular lead generation approaches that have been used for designing multiple ligands (DMLs). This paper focuses on dual-acting chemical entities that incorporate parts of two drugs or bioactive compounds to compose hybrid molecules. It also presents some key concepts and limitations/strengths of lead generation methods by comparing the combination framework method with screening approaches. In addition, a number of examples representing applications of hybrid molecules in drug discovery are included. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
The design and implementation of hydrographical information management system (HIMS)
NASA Astrophysics Data System (ADS)
Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming
2005-10-01
With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management mode and techniques are unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. A number of advanced techniques, including GIS, RS, spatial database management and VR techniques, are introduced to solve these problems. Some design principles and key techniques of the HIMS, including the mixed mode based on B/S, C/S and stand-alone computer modes, multi-source and multi-scale data organization and management, multi-source data integration and diverse visualization of digital charts, and efficient security control strategies, are illustrated in detail. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received good evaluations.
Mátyus, Péter; Chai, Christina L L
2016-06-20
Multitargeting is a valuable concept in drug design for the development of effective drugs for the treatment of multifactorial diseases. This concept has most frequently been realized by incorporating two or more pharmacophores into a single hybrid molecule. Many such hybrids, due to the increased molecular size, exhibit unfavorable physicochemical properties leading to adverse effects and/or an inappropriate ADME (absorption, distribution, metabolism, and excretion) profile. To avoid this limitation and achieve additional therapeutic benefits, here we describe a novel multitargeting strategy based on the synergistic effects of a parent drug and its active metabolite(s). The concept of metabolism-activated multitargeting (MAMUT) is illustrated using a number of examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fan, Yuanjie; Yin, Yuehong
2013-12-01
Although exoskeletons have received enormous attention and have been widely used in gait training and walking assistance in recent years, few reports have addressed their application during early poststroke rehabilitation. This paper presents a healthcare technology for active and progressive early rehabilitation using multisource information fusion from surface electromyography and force-position extended physiological proprioception. Active-compliance control based on the interaction force between patient and exoskeleton is applied to accelerate the recovery of neuromuscular function, whereby progressive treatment through timely evaluation contributes to an effective and appropriate physical rehabilitation. Moreover, a clinic-oriented rehabilitation system, wherein a lower extremity exoskeleton with active compliance is mounted on a standing bed, is designed to ensure comfortable and secure rehabilitation according to the structural and control requirements. Preliminary experiments and a clinical trial provide valuable information on the feasibility, safety, and effectiveness of the progressive exoskeleton-assisted training.
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to realize the relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model of PM2.5 against physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, PM10) and social media data (microblog data) was proposed, based on the Multivariate Statistical Analysis method. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high mathematical correlation with PM2.5 concentrations. The correlation analysis was further studied based on a machine learning model, the Back Propagation Neural Network (hereinafter referred to as BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an Autoregressive Integrated Moving Average (hereinafter referred to as ARIMA) time series model was applied to explore the short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
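A hedged Python sketch of the short-term time-series step, fitting a standard ARIMA model to a daily PM2.5 series with statsmodels; both the synthetic series and the (1, 1, 1) order are assumptions made for illustration, not the settings used in the study.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
pm25 = 80 + np.cumsum(rng.normal(0, 5, size=120))   # synthetic daily PM2.5 concentrations

fitted = ARIMA(pm25, order=(1, 1, 1)).fit()          # order chosen only for illustration
print(fitted.forecast(steps=3))                      # predicted concentrations for the next 3 days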
How to retrieve additional information from the multiplicity distributions
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2017-01-01
Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism, which relies on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
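For reference, the baseline negative binomial distribution referred to above reads, with mean multiplicity \langle N \rangle and shape parameter k (the modification studied in the paper then lets these parameters depend on N):

P(N) = \frac{\Gamma(N + k)}{\Gamma(N + 1)\,\Gamma(k)}
\left( \frac{\langle N \rangle}{\langle N \rangle + k} \right)^{N}
\left( \frac{k}{\langle N \rangle + k} \right)^{k}.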
A research on the positioning technology of vehicle navigation system from single source to "ASPN"
NASA Astrophysics Data System (ADS)
Zhang, Jing; Li, Haizhou; Chen, Yu; Chen, Hongyue; Sun, Qian
2017-10-01
Due to the suddenness and complexity of modern warfare, land-based weapon systems need to have precision strike capability on roads and railways. The vehicle navigation system is one of the most important pieces of equipment for land-based weapon systems with precision strike capability. Single-source navigation systems have inherent shortcomings in providing continuous and stable navigation information. To overcome these shortcomings, multi-source positioning technology has been developed. The All Source Positioning and Navigation (ASPN) program was proposed in 2010, which seeks to enable low cost, robust, and seamless navigation solutions for military use on any operational platform and in any environment, with or without GPS. The development trend of vehicle positioning technology is reviewed in this paper. The trend indicates that positioning technology is developing from single source and multi-source to ASPN. The data fusion techniques based on multi-source and ASPN are analyzed in detail.
Mercury⊕: An evidential reasoning image classifier
NASA Astrophysics Data System (ADS)
Peddle, Derek R.
1995-12-01
MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth's system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland, forests, and so on. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions is widely applied to environmental disaster monitoring. With multi-source and multi-temporal high resolution images, we conduct rapid, systematic and serial observations of economic damage and environmental disasters on earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of using UAS technology include great mobility, real-time availability, and operation under more flexible weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera's interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can be used as a reference for disaster recovery in the future.
Dresen, S; Ferreirós, N; Gnann, H; Zimmermann, R; Weinmann, W
2010-04-01
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
General practitioner registrars' experiences of multisource feedback: a qualitative study.
Findlay, Nigel
2012-09-01
To explore the experiences of general practitioner (GP) specialty training registrars, thereby generating more understanding of the ways in which multisource feedback impacts upon their self-perceptions and professional behaviour, and provide information that might guide its use in the revalidation process of practising GPs. Complete transcripts of semi-structured, audio-taped qualitative interviews were analysed using the constant comparative method, to describe the experiences of multisource feedback for individual registrars. Five GP registrars participated. The first theme to emerge was the importance of the educational supervisor in encouraging the registrar through the emotional response, then facilitating interpretation of feedback and personal development. The second was the differing attitudes to learning and development, which may be in conflict with threats to self-image. The current RCGP format for obtaining multisource feedback for GP registrars may not always be achieving its purpose of challenging self-perceptions and motivating improved performance. An enhanced qualitative approach, through personal interviews rather than anonymous questionnaires, may provide a more accurate picture. This would address the concerns of some registrars by reducing their logistical burden and may facilitate more constructive feedback. The educational supervisor has an important role in promoting personal development, once this feedback is shared. The challenge for teaching organisations is to create a climate of comfort for learning, yet encourage learning beyond a 'comfort zone'.
Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform
NASA Astrophysics Data System (ADS)
Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.
2018-04-01
To address the difficulty of using existing commercial Geographic Information System platforms to integrate and fuse multi-source image data, this research proposes the loading of multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two types of tile data loading schemes have been designed for the mashup of tiles: the single data source loading scheme and the multi-data source loading scheme. The multi-source digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single data source loading scheme and the multi-data source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low cost and highly flexible solution for small and medium-scale GIS programs and has potential practical application value. The problem of deformation during the transition between different spatial references is an important topic for further research.
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
USDA-ARS?s Scientific Manuscript database
In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yunsong; Schuster, Gerard T.
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data
Huang, Yunsong; Schuster, Gerard T.
2017-10-26
The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.
Kaur, Gaganpreet; Kaur, Maninder; Silakari, Om
2014-01-01
Recent research endeavors to discover ultimate multi-target ligands, an increasingly feasible and attractive alternative to existing mono-targeted drugs for the treatment of the complex, multi-factorial inflammation process which underlies a plethora of debilitating health conditions. To pursue this option, exploration of relevant chemical core scaffolds is of utmost need. The privileged benzimidazole scaffold, being a historically versatile structural motif, could offer a viable starting point in the search for novel multi-target ligands against the multi-factorial inflammation process since, when appropriately substituted, it can selectively modulate diverse receptors, pathways and enzymes associated with the pathogenesis of inflammation. Despite this remarkable capability, the multi-target capacity of the benzimidazole scaffold remains largely unexploited. With this in focus, the present review article attempts to provide a synopsis of published research to exemplify the valuable use of the benzimidazole nucleus and focuses on its suitability as a starting scaffold to develop multi-targeted anti-inflammatory ligands.
The Multi-energy High precision Data Processor Based on AD7606
NASA Astrophysics Data System (ADS)
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information from multi-energy systems, forming the information platform of the energy Internet at Laogang, with electricity as its major energy source. Combined with information fusion technologies, this paper analyzes the data to improve the overall energy system scheduling capability and reliability.
NASA Astrophysics Data System (ADS)
Li, J.; Wen, G.; Li, D.
2018-04-01
To master background information on the utilization and ecological condition of Yunnan province's grassland resources and to improve grassland management capacity, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional grassland resource investigation method is ground-based investigation, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, by contrast, is low cost, wide range and efficient, and can reflect the present situation of grassland resources objectively. It has become an indispensable grassland monitoring technology and data source, and has gained more and more recognition and application in grassland resources monitoring research. This paper researches the application of multi-source remote sensing images in the Yunnan province grassland resources investigation. First, it extracts grassland resources thematic information and conducts field investigation through BJ-2 high spatial resolution image segmentation. Second, it classifies grassland types and evaluates grassland degradation degree through the high resolution characteristics of Landsat 8 images. Third, it obtains a grass yield model and quality classification through the high resolution and wide scanning width characteristics of MODIS images and sample investigation data. Finally, it performs qualitative analysis of grassland in the field through UAV remote sensing images. The project implementation proves that multi-source remote sensing data can be applied to the grassland resources investigation in Yunnan province and is an indispensable method.
Modeling multi-source flooding disaster and developing simulation framework in Delta
NASA Astrophysics Data System (ADS)
Liu, Y.; Cui, X.; Zhang, W.
2016-12-01
Most Delta regions of the world are densely populated and have advanced economies. However, due to the impact of multi-source flooding (upstream floods, rainstorm waterlogging, storm surge floods), Delta regions are very vulnerable, and academic circles attach great importance to multi-source flooding disasters in these areas. The Pearl River Delta urban agglomeration in south China is selected as the research area. Based on analysis of natural and environmental characteristics data of the Delta urban agglomeration (remote sensing data, land use data, topographic maps, etc.) and hydrological monitoring data, and on research into the uneven distribution and process of regional rainfall, the relationship between the underlying surface and runoff parameters, and the effect of flood storage patterns, we use an automatic or semi-automatic method for dividing spatial units to reflect the runoff characteristics of the urban agglomeration, and develop a Multi-model Ensemble System for a changing environment, including an urban hydrologic model, a parallel computational 1D&2D hydrodynamic model, a storm surge forecast model and other professional models. The system will have capabilities such as real-time setting of a variety of boundary conditions, fast and real-time calculation, dynamic presentation of results, and powerful statistical analysis functions. The model can be optimized and improved by a variety of verification methods. This work was supported by the National Natural Science Foundation of China (41471427); Special Basic Research Key Fund for Central Public Scientific Research Institutes.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M
2017-02-01
Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation metric and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
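A minimal Python sketch of the pairwise Jensen-Shannon distances that the proposed stability metrics build on; the simplex projection and the two derived metrics themselves are not reproduced, and the three discretised source PDFs are made-up histograms over common bins.

import numpy as np
from scipy.spatial.distance import jensenshannon

sources = {
    "site_A": np.array([0.50, 0.30, 0.15, 0.05]),
    "site_B": np.array([0.45, 0.35, 0.15, 0.05]),
    "site_C": np.array([0.10, 0.20, 0.30, 0.40]),
}

names = list(sources)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = jensenshannon(sources[a], sources[b], base=2)   # distance bounded in [0, 1]
        print(f"JS distance {a} vs {b}: {d:.3f}")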
A novel multi-target regression framework for time-series prediction of drug efficacy.
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-18
Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
A novel multi-target regression framework for time-series prediction of drug efficacy
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-01
Excavating from small samples is a challenging pharmacokinetic problem, where statistical methods can be applied. Pharmacokinetic data is special due to the small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation in TCM prescription. Here, a novel method named Multi-target Regression Framework to deal with the problem of efficacy prediction is proposed. We employ the correlation between the values of different time sequences and add predictive targets of previous time as features to predict the value of current time. Several experiments are conducted to test the validity of our method and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task. PMID:28098186
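A minimal Python sketch of the framework's central idea, adding the targets observed at earlier time points as extra features when predicting the current time point, with support vector regression as the base learner; the data shapes, synthetic values and SVR hyperparameters are assumptions made only for illustration.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_samples, n_features, n_times = 20, 5, 4
X = rng.normal(size=(n_samples, n_features))                   # prescription features
Y = np.cumsum(rng.normal(size=(n_samples, n_times)), axis=1)   # efficacy values over time

models = []
for t in range(n_times):
    # Augment the features with all targets from previous time points.
    X_t = X if t == 0 else np.hstack([X, Y[:, :t]])
    models.append(SVR(kernel="rbf", C=1.0).fit(X_t, Y[:, t]))

# Predict the efficacy trajectory of the first sample, using its observed earlier targets.
for t, model in enumerate(models):
    feats = X[:1] if t == 0 else np.hstack([X[:1], Y[:1, :t]])
    print(f"t={t}: {model.predict(feats)[0]:.3f}")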
NASA Astrophysics Data System (ADS)
Basant, Nikita; Gupta, Shikha
2018-03-01
The reactions of molecular ozone (O3), hydroxyl (•OH) and nitrate (NO3) radicals are among the major pathways of removal of volatile organic compounds (VOCs) in the atmospheric environment. The gas-phase kinetic rate constants (kO3, kOH, kNO3) are thus, important in assessing the ultimate fate and exposure risk of atmospheric VOCs. Experimental data for rate constants are not available for many emerging VOCs and the computational methods reported so far address a single target modeling only. In this study, we have developed a multi-target (mt) QSPR model for simultaneous prediction of multiple kinetic rate constants (kO3, kOH, kNO3) of diverse organic chemicals considering an experimental data set of VOCs for which values of all the three rate constants are available. The mt-QSPR model identified and used five descriptors related to the molecular size, degree of saturation and electron density in a molecule, which were mechanistically interpretable. These descriptors successfully predicted three rate constants simultaneously. The model yielded high correlations (R2 = 0.874-0.924) between the experimental and simultaneously predicted endpoint rate constant (kO3, kOH, kNO3) values in test arrays for all the three systems. The model also passed all the stringent statistical validation tests for external predictivity. The proposed multi-target QSPR model can be successfully used for predicting reactivity of new VOCs simultaneously for their exposure risk assessment.
Objected-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. For this purpose, this paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment to implement urban feature classification. The experiment uses the Protégé software developed by Stanford University in the United States and the intelligent image analysis software eCognition as the experimental platform, and uses hyperspectral imagery and Lidar data obtained from flights over DaFeng City in JiangSu as the main data sources. First, the hyperspectral image is used to obtain feature knowledge of the remote sensing image and related special indices; second, the Lidar data is used to generate an nDSM (Normalized Digital Surface Model) to obtain elevation information; finally, the image feature knowledge, special indices and elevation information are used to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs especially well for building classification. The method not only takes advantage of multi-source spatial data, such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, which provides an effective way for object-oriented remote sensing image classification in the future.
ERIC Educational Resources Information Center
Goldring, Ellen B.; Mavrogordato, Madeline; Haynes, Katherine Taylor
2015-01-01
Purpose: A relatively new approach to principal evaluation is the use of multisource feedback, which typically entails a leader's self-evaluation as well as parallel evaluations from subordinates, peers, and/or superiors. However, there is little research on how principals interact with evaluation data from multisource feedback systems. This…
NASA Astrophysics Data System (ADS)
Heitlager, Ilja; Helms, Remko; Brinkkemper, Sjaak
Information Technology Outsourcing practice and research mainly consider the outsourcing phenomenon as a generic fulfilment of the IT function by external parties. Inspired by the logic of commodity, core competencies and economies of scale, organisations transfer assets, existing departments and IT functions to external parties. Although the generic approach might work for desktop outsourcing, where standardisation is the dominant factor, it does not work for the management of mission critical applications. Managing mission critical applications requires a different approach in which building relationships is critical. The relationships involve inter- and intra-organisational parties in a multi-sourcing arrangement, called an IT service chain, consisting of multiple (specialist) parties that have to collaborate closely to deliver high quality services.
NASA Astrophysics Data System (ADS)
Huang, W.; Jiang, J.; Zha, Z.; Zhang, H.; Wang, C.; Zhang, J.
2014-04-01
Geospatial data resources are the foundation of the construction of a geo portal which is designed to provide online geoinformation services for government, enterprises and the public. It is vital to keep geospatial data fresh, accurate and comprehensive in order to satisfy the requirements of the application and development of geographic location, route navigation, geo search and so on. One of the major problems we face is data acquisition. For us, integrating multi-source geospatial data is the main means of data acquisition. This paper introduces a practical integration approach for multi-source geospatial data with different data models, structures and formats, which provided the construction of the National Geospatial Information Service Platform of China (NGISP) with effective technical support. NGISP is China's official geo portal, providing online geoinformation services based on the internet, the e-government network and the classified network. Within the NGISP architecture, there are three kinds of nodes: national, provincial and municipal. The geospatial data comes from these nodes, and the different datasets are heterogeneous. According to the results of the analysis of the heterogeneous datasets, the first thing we do is define the basic principles of data fusion, covering the following aspects: (1) location precision; (2) geometric representation; (3) up-to-date state; (4) attribute values; and (5) spatial relationships. Then the technical procedure is researched, and a method for processing different categories of features such as roads, railways, boundaries, rivers, settlements and buildings is proposed based on these principles. A case study in Jiangsu province demonstrates the applicability of the principles, procedure and method of multi-source geospatial data integration.
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data, including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling, is brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been examined to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is achieved, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
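As a point of reference, a standard Gaussian plume term for a single point source is shown below; this textbook form is assumed here as the kind of per-source building block summed by the multi-source Gaussian component above, not the paper's exact formulation. Q is the source strength, u the wind speed, H the effective release height, and \sigma_y, \sigma_z the dispersion coefficients:

C(x, y, z) = \frac{Q}{2 \pi u \, \sigma_y \sigma_z}
\exp\!\left( -\frac{y^{2}}{2 \sigma_y^{2}} \right)
\left[ \exp\!\left( -\frac{(z - H)^{2}}{2 \sigma_z^{2}} \right)
+ \exp\!\left( -\frac{(z + H)^{2}}{2 \sigma_z^{2}} \right) \right].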
Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Zhao, Zishu; Li, Yuankun
2017-07-26
In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, these cannot achieve precise multi-target registration when the scenes are non-planar. Our framework is devoted to solving the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy in which only the features on the corresponding foreground pairs are matched. Besides, new reservoirs based on the Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix in all tested datasets.
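A minimal Python/OpenCV sketch of the per-target registration step, estimating one homography per tracked target from matched points on its corresponding foreground blobs rather than a single global transform; the point coordinates are placeholders and the RANSAC reprojection threshold is an assumed value.

import numpy as np
import cv2

def register_target(pts_infrared, pts_visible):
    # Homography mapping one target's infrared points onto the visible image.
    H, inlier_mask = cv2.findHomography(
        np.asarray(pts_infrared, dtype=np.float32),
        np.asarray(pts_visible, dtype=np.float32),
        method=cv2.RANSAC,
        ransacReprojThreshold=3.0,
    )
    return H, inlier_mask

# One call per target, so each target receives its own transform.
ir_pts = [[10, 12], [40, 15], [38, 60], [12, 58], [25, 35]]
vis_pts = [[110, 112], [141, 116], [137, 161], [111, 157], [124, 135]]
H, mask = register_target(ir_pts, vis_pts)
print(H)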
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
NASA Astrophysics Data System (ADS)
Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.
2016-11-01
The geological disposal of high-level radioactive waste (hereinafter "geological disposal") is a long-term, complex, and systematic scientific project. The data and information resources generated during its research and development (hereinafter "R&D") provide essential support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the data related to research and engineering in the siting of geological disposal repositories are complicated (multi-source, multi-dimensional, and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the data model design of the geo-information database for the disposal repository faces serious challenges. In this paper, the data resources of the pre-selected areas of the repository are comprehensively surveyed and systematically analyzed. Based on a thorough understanding of the application requirements, the work provides solutions for the key technical problems, including a reasonable classification system for multi-source data entities, complex logical relations, and effective physical storage structures. The new solution moves beyond the data classification and conventional spatial data organization models applied in traditional industry practice, and organizes and integrates the data around data entities and their spatial relationships, which are independent, complete, and of significant application value in HLW geological disposal. Reasonable, feasible, and flexible conceptual, logical, and physical data models have been established to ensure the effective integration, and to facilitate the application development, of multi-source data in the pre-selected areas for geological disposal.
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
Unified Research on Network-Based Hard/Soft Information Fusion
2016-02-02
There are a number of search tree run parameters which must be set depending on the experimental setting; a pilot study was run to identify... The University at Buffalo (UB) Center for Multisource Information Fusion...
Runtime Simulation for Post-Disaster Data Fusion Visualization
2006-10-01
Center for Multisource Information Fusion (CMIF), The State University of New York at Buffalo, Buffalo, NY 14260, USA. kesh@eng.buffalo.edu
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, the visible image collected by a single sensor can express the shape, color, and texture details of the target very well, but because of the haze its sharpness is low and parts of the target subject are lost; the infrared image collected by a single sensor, owing to its thermal-radiation imaging and strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method improves the clarity of the visible target and highlights the occluded infrared target for target recognition.
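The dehazing (Dark Channel Prior) and SURF registration steps are omitted here; this minimal sketch shows only the final pixel-level weighted fusion of a dehazed visible image with a registered infrared image, using a fixed weight as a stand-in for the paper's information-complementarity weighting (OpenCV and single-channel inputs are assumed):

```python
import cv2
import numpy as np

def fuse_weighted(visible_dehazed, infrared, alpha=0.6):
    """Pixel-level weighted fusion of a dehazed visible image and a registered
    infrared image (both single-channel). The fixed weight alpha is a simple
    stand-in for an information-complementarity weighting scheme."""
    vis = visible_dehazed.astype(np.float32)
    ir = cv2.resize(infrared, (vis.shape[1], vis.shape[0])).astype(np.float32)
    fused = alpha * vis + (1.0 - alpha) * ir
    return np.clip(fused, 0, 255).astype(np.uint8)
```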
Multisource least-squares reverse-time migration with structure-oriented filtering
NASA Astrophysics Data System (ADS)
Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong
2016-09-01
The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve the data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as preconditioner into the multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
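The Born modeling and migration operators cannot be reconstructed from an abstract; as a generic sketch of the data-misfit minimization it describes, here is a conjugate-gradient least-squares (CGLS) loop with the forward operator and its adjoint left as abstract callables. The structure-oriented filtering preconditioner would act inside this loop and is only noted in a comment.

```python
import numpy as np

def cgls(forward, adjoint, d, n_model, n_iter=20):
    """Conjugate-gradient least squares for min ||L m - d||^2.

    forward : callable implementing L (in LSRTM, Born modeling of blended data).
    adjoint : callable implementing L^T (migration).
    d       : observed (blended) data, 1-D array.
    """
    m = np.zeros(n_model)
    r = d - forward(m)
    s = adjoint(r)
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        # A structure-oriented filter could be applied to p here as a
        # preconditioner to suppress crosstalk while preserving structure.
        q = forward(p)
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = adjoint(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Tiny check with an explicit matrix operator standing in for L.
L = np.random.default_rng(0).normal(size=(30, 10))
m_true = np.ones(10)
m_est = cgls(lambda m: L @ m, lambda r: L.T @ r, L @ m_true, n_model=10)
```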
Zheng, Chunli; Wang, Jinan; Liu, Jianling; Pei, Mengjie; Huang, Chao; Wang, Yonghua
2014-08-01
The term systems pharmacology describes a field of study that uses computational and experimental approaches to broaden the view of drug actions rooted in molecular interactions and to advance the process of drug discovery. The aim of this work is to highlight the role that systems pharmacology plays across multi-target drug discovery from natural products for cardiovascular diseases (CVDs). Firstly, based on network pharmacology methods, we reconstructed the drug-target and target-target networks to determine the putative protein target set of multi-target drugs for CVD treatment. Secondly, we reintegrated a compound dataset of natural products and then obtained a subset of multi-target compounds by a virtual screening process. Thirdly, a drug-likeness evaluation was applied to find the ADME-favorable compounds in this subset. Finally, we conducted in vitro experiments to evaluate the reliability of the selected chemicals and targets. We found that four of the five randomly selected natural molecules can effectively act on the target set for CVDs, indicating the reasonableness of our systems-based method. This strategy may serve as a new model for multi-target drug discovery for complex diseases.
Foundational Technologies for Activity-Based Intelligence - A Review of the Literature
2014-02-01
...academic community. The Center for Multisource Information Fusion (CMIF) at the University at Buffalo, Harvard University, and the University of... depth of researchers conducting high-value Multi-INT research; these efforts are delivering high-value research outcomes, e.g., [46-47]. CMIF...
Satisfaction Formation Processes in Library Users: Understanding Multisource Effects
ERIC Educational Resources Information Center
Shi, Xi; Holahan, Patricia J.; Jurkat, M. Peter
2004-01-01
This study explores whether disconfirmation theory can explain satisfaction formation processes in library users. Both library users' needs and expectations are investigated as disconfirmation standards. Overall library user satisfaction is predicted to be a function of two independent sources--satisfaction with the information product received…
Challenges with secondary use of multi-source water-quality data in the United States
Sprague, Lori A.; Oelsner, Gretchen P.; Argue, Denise M.
2017-01-01
Combining water-quality data from multiple sources can help counterbalance diminishing resources for stream monitoring in the United States and lead to important regional and national insights that would not otherwise be possible. Individual monitoring organizations understand their own data very well, but issues can arise when their data are combined with data from other organizations that have used different methods for reporting the same common metadata elements. Such use of multi-source data is termed “secondary use”—the use of data beyond the original intent determined by the organization that collected the data. In this study, we surveyed more than 25 million nutrient records collected by 488 organizations in the United States since 1899 to identify major inconsistencies in metadata elements that limit the secondary use of multi-source data. Nearly 14.5 million of these records had missing or ambiguous information for one or more key metadata elements, including (in decreasing order of records affected) sample fraction, chemical form, parameter name, units of measurement, precise numerical value, and remark codes. As a result, metadata harmonization to make secondary use of these multi-source data will be time consuming, expensive, and inexact. Different data users may make different assumptions about the same ambiguous data, potentially resulting in different conclusions about important environmental issues. The value of these ambiguous data is estimated at $US12 billion, a substantial collective investment by water-resource organizations in the United States. By comparison, the value of unambiguous data is estimated at $US8.2 billion. The ambiguous data could be preserved for uses beyond the original intent by developing and implementing standardized metadata practices for future and legacy water-quality data throughout the United States.
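As a small illustration of the screening task described above, the sketch below flags records whose key metadata elements are missing or blank. The harmonized column names are hypothetical; real multi-source records first have to be mapped from each organization's own schema onto common fields.

```python
import pandas as pd

# Hypothetical harmonized field names (not the actual schema of any agency).
REQUIRED = ["parameter_name", "sample_fraction", "chemical_form",
            "units", "result_value", "remark_code"]

def flag_ambiguous(records: pd.DataFrame) -> pd.Series:
    """True where any key metadata element is missing or blank."""
    cols = records.reindex(columns=REQUIRED)
    missing = cols.isna()
    blank = cols.apply(lambda c: c.astype(str).str.strip().eq(""))
    return (missing | blank).any(axis=1)

records = pd.DataFrame({
    "parameter_name": ["Nitrate", "Phosphorus", None],
    "sample_fraction": ["Dissolved", "", "Total"],
    "chemical_form": ["as N", "as P", "as P"],
    "units": ["mg/L as N", "mg/L", "mg/L"],
    "result_value": [0.42, 0.03, 0.10],
    "remark_code": ["A", "<", "E"],
})
print(flag_ambiguous(records))  # records 1 (blank fraction) and 2 (no name) flagged
```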
Favia, Angelo D; Habrant, Damien; Scarpelli, Rita; Migliore, Marco; Albani, Clara; Bertozzi, Sine Mandrup; Dionisi, Mauro; Tarozzo, Glauco; Piomelli, Daniele; Cavalli, Andrea; De Vivo, Marco
2012-10-25
Pain and inflammation are major therapeutic areas for drug discovery. Current drugs for these pathologies have limited efficacy, however, and often cause a number of unwanted side effects. In the present study, we identify the nonsteroidal anti-inflammatory drug carprofen as a multitarget-directed ligand that simultaneously inhibits cyclooxygenase-1 (COX-1), COX-2, and fatty acid amide hydrolase (FAAH). Additionally, we synthesized and tested several derivatives of carprofen sharing this multitarget activity. This may result in improved analgesic efficacy and reduced side effects (Naidu et al. J. Pharmacol. Exp. Ther. 2009, 329, 48-56; Fowler, C. J.; et al. J. Enzyme Inhib. Med. Chem. 2012, in press; Sasso et al. Pharmacol. Res. 2012, 65, 553). The new compounds are among the most potent multitarget FAAH/COX inhibitors reported so far in the literature and thus may represent promising starting points for the discovery of new analgesic and anti-inflammatory drugs.
On Meaningful Measurement: Concepts, Technology and Examples.
ERIC Educational Resources Information Center
Cheung, K. C.
This paper discusses how concepts and procedural skills in problem-solving tasks, as well as affects and emotions, can be subjected to meaningful measurement (MM), based on a multisource model of learning and a constructivist information-processing theory of knowing. MM refers to the quantitative measurement of conceptual and procedural knowledge…
Cross-Modulation Interference with Lateralization of Mixed-Modulated Waveforms
ERIC Educational Resources Information Center
Hsieh, I-Hui; Petrosyan, Agavni; Goncalves, Oscar F.; Hickok, Gregory; Saberi, Kourosh
2010-01-01
Purpose: This study investigated the ability to use spatial information in mixed-modulated (MM) sounds containing concurrent frequency-modulated (FM) and amplitude-modulated (AM) sounds by exploring patterns of interference when different modulation types originated from different loci as may occur in a multisource acoustic field. Method:…
Ambure, Pravin; Bhat, Jyotsna; Puzyn, Tomasz; Roy, Kunal
2018-04-23
Alzheimer's disease (AD) is a multi-factorial disease, which can be simply outlined as an irreversible and progressive neurodegenerative disorder with an unclear root cause. It is a major cause of dementia in elderly people. In the present study, utilizing the structural and biological activity information of ligands for five important and most studied vital targets (i.e. cyclin-dependent kinase 5, β-secretase, monoamine oxidase B, glycogen synthase kinase 3β, acetylcholinesterase) that are believed to be effective against AD, we have developed five classification models using the linear discriminant analysis (LDA) technique. Considering the importance of data curation, we have given particular attention to chemical and biological data curation, which is a difficult task especially in the case of big datasets. Thus, to ease the curation process, we have designed Konstanz Information Miner (KNIME) workflows, which are made available at http://teqip.jdvu.ac.in/QSAR_Tools/. The developed models were appropriately validated based on the predictions for experiment-derived data from test sets, as well as true external set compounds including known multi-target compounds. The domain of applicability for each classification model was checked based on a confidence estimation approach. Further, these validated models were employed for screening of natural compounds collected from the InterBioScreen natural database (https://www.ibscreen.com/natural-compounds). The natural compounds that were categorized as 'active' in at least two of the five developed classification models were considered multi-target leads, and these compounds were further screened using a drug-likeness filter and the molecular docking technique, and then thoroughly analyzed using molecular dynamics studies. Finally, the most promising multi-target natural compounds against AD are suggested.
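The descriptors, curated datasets, and KNIME workflows are specific to the paper; as a minimal sketch of the modeling step, the example below fits a linear discriminant analysis classifier for one hypothetical target using scikit-learn and random placeholder data. Descriptor calculation, data curation, and applicability-domain checks are assumed to happen upstream.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Hypothetical data: rows are ligands of one target, columns are molecular
# descriptors, y is active (1) / inactive (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                          stratify=y)
clf = LinearDiscriminantAnalysis()   # the LDA technique named in the abstract
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))

# One such classifier is built per target; a compound predicted 'active' by at
# least two of the five models would be treated as a multi-target lead.
```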
Sharma, Megha; Sharma, Kusum; Sharma, Aman; Gupta, Nalini; Rajwanshi, Arvind
2016-09-01
Tuberculous lymphadenitis (TBLA), the most common presentation of tuberculosis, poses a significant diagnostic challenge in the developing countries. Timely, accurate and cost-effective diagnosis can decrease the high morbidity associated with TBLA especially in resource-poor high-endemic regions. The loop-mediated isothermal amplification assay (LAMP), using two targets, was evaluated for the diagnosis of TBLA. LAMP assay using 3 sets of primers (each for IS6110 and MPB64) was performed on 170 fine needle aspiration samples (85 confirmed, 35 suspected, 50 control cases of TBLA). Results were compared against IS6110 PCR, cytology, culture and smear. The overall sensitivity and specificity of LAMP assay, using multi-targeted approach, was 90% and 100% respectively in diagnosing TBLA. The sensitivity of multi-targeted LAMP, only MPB64 LAMP, only IS6110 LAMP and IS6110 PCR was 91.7%, 89.4%, 84.7% and 75.2%, respectively among confirmed cases and 85.7%, 77.1%, 68.5% and 60%, respectively among suspected cases of TBLA. Additional 12/120 (10%) cases were detected using multi-targeted method. The multi-targeted LAMP, with its speedy and reliable results, is a potential diagnostic test for TBLA in low-resource countries. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhang, Xiao-Bo; Li, Meng; Wang, Hui; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
In the literature, there is a great deal of information on the distribution of Chinese herbal medicines; however, limited by the technical methods available at the time, the origins or distributions of Chinese herbal medicines recorded in ancient literature were described only roughly. Establishing background information on the types and distribution of Chinese medicine resources in each region is one of the main objectives of the national census of Chinese medicine resources. According to the national census technical specifications and pilot work experience, census teams can effectively collect the location information of traditional Chinese medicine resources with modern technical methods such as "3S" technology, computer network technology and digital camera technology. Detailed and specific location information, such as regional differences and similarities in resource endowment, biological characteristics and spatial distribution, provides technical and data support for evaluating the accuracy and objectivity of the Chinese medicine resource census data. With the support of spatial information technology and based on location information, the statistical summarization and sharing of multi-source census data can be realized. The spatial integration, aggregation and management of massive traditional Chinese medicine resource data and related basic data can help in mining the scientific rules of traditional Chinese medicine resources at the overall level and fully reveal their scientific connotations. Copyright© by the Chinese Pharmaceutical Association.
Malling, Bente; Mortensen, Lene; Bonderup, Thomas; Scherpbier, Albert; Ringsted, Charlotte
2009-12-10
Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. Against this background, we aimed to study the additional effect of a leadership course following a multi-source feedback procedure, compared to multi-source feedback alone, particularly regarding the development of leadership skills over time. Study participants were consultants responsible for postgraduate medical education at clinical departments. The study design was a pre-post measurement with an intervention and a control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using Mann-Whitney's U-test and multivariate analysis of variance. There were no differences in multi-source feedback scores at one-year follow-up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). The study indicates that a leadership course following an MSF procedure, compared to MSF alone, does not improve the leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time, and the time frame of one year might have been too short to show improvement in the leadership skills of consultants responsible for education. Further studies are needed to investigate whether other combinations of initiatives to develop leadership might have more impact in the clinical setting.
Multi-targeted priming for genome-wide gene expression assays.
Adomas, Aleksandra B; Lopez-Giraldez, Francesc; Clark, Travis A; Wang, Zheng; Townsend, Jeffrey P
2010-08-17
Complementary approaches to assaying global gene expression are needed to assess gene expression in regions that are poorly assayed by current methodologies. A key component of nearly all gene expression assays is the reverse transcription of transcribed sequences that has traditionally been performed by priming the poly-A tails on many of the transcribed genes in eukaryotes with oligo-dT, or by priming RNA indiscriminately with random hexamers. We designed an algorithm to find common sequence motifs that were present within most protein-coding genes of Saccharomyces cerevisiae and of Neurospora crassa, but that were not present within their ribosomal RNA or transfer RNA genes. We then experimentally tested whether degenerately priming these motifs with multi-targeted primers improved the accuracy and completeness of transcriptomic assays. We discovered two multi-targeted primers that would prime a preponderance of genes in the genomes of Saccharomyces cerevisiae and Neurospora crassa while avoiding priming ribosomal RNA or transfer RNA. Examining the response of Saccharomyces cerevisiae to nitrogen deficiency and profiling Neurospora crassa early sexual development, we demonstrated that using multi-targeted primers in reverse transcription led to superior performance of microarray profiling and next-generation RNA tag sequencing. Priming with multi-targeted primers in addition to oligo-dT resulted in higher sensitivity, a larger number of well-measured genes and greater power to detect differences in gene expression. Our results provide the most complete and detailed expression profiles of the yeast nitrogen starvation response and N. crassa early sexual development to date. Furthermore, our multi-targeting priming methodology for genome-wide gene expression assays provides selective targeting of multiple sequences and counter-selection against undesirable sequences, facilitating a more complete and precise assay of the transcribed sequences within the genome.
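The actual motif-search algorithm and primer sequences are in the paper; as a toy sketch of the underlying idea, the code below counts which k-mers occur in many protein-coding sequences while never occurring in rRNA/tRNA sequences, and ranks them as candidate priming motifs. The sequences and the scoring rule are illustrative assumptions.

```python
from collections import Counter

def kmer_gene_hits(seqs, k):
    """For each k-mer, count how many sequences contain it at least once."""
    hits = Counter()
    for s in seqs:
        hits.update({s[i:i + k] for i in range(len(s) - k + 1)})
    return hits

def candidate_priming_motifs(coding, avoid, k=8, top=5):
    """Rank k-mers present in many coding genes and absent from rRNA/tRNA genes."""
    cov = kmer_gene_hits(coding, k)
    bad = kmer_gene_hits(avoid, k)
    scored = [(kmer, n) for kmer, n in cov.items() if bad[kmer] == 0]
    return sorted(scored, key=lambda t: -t[1])[:top]

# Toy sequences (placeholders, not S. cerevisiae or N. crassa data).
coding = ["ATGGCTGCTAAGGCTGCTGGT", "ATGGCTGCTAAGTCTGACGGT", "ATGAAAGCTGCTAAGGCTTAA"]
avoid = ["GGTTGGATGGTAGCTAGCTA"]
print(candidate_priming_motifs(coding, avoid, k=6))
```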
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Primary Goal: Assist with the evaluation and measurement of wetlands hydroperiod at the Plum Brook Station using multi-source remote sensing data, as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded on its multi-source remote sensing capabilities to help estimate and measure the hydroperiod and relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring the hydroperiod and relative soil moisture of wetlands. This is important because a changing regional climate poses several potential risks for wetland ecosystem function. The year-two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: 1) show the relative length of hydroperiod using available remote sensing datasets; 2) a date-linked table of wetlands extent over time for all feasible non-forested wetlands; 3) utilize LIDAR data to measure the topographic height above sea level of all wetlands, the wetland-to-catchment area ratio, the slope of wetlands, and other useful variables; 4) a demonstration of how analyzed results from multiple remote sensing data sources can help with wetlands vulnerability assessment; and 5) an MTRI-style report summarizing year 2 results. This report serves as a descriptive summary of our completion of these five deliverables. Additionally, two formal meetings were held with Larry Liou and Amanda Sprinzl to provide project updates and receive direction on outputs; these were held on 2/26/15 and 9/17/15 at the Plum Brook Station. Principal Component Analysis (PCA) is a multivariate statistical technique used to identify dominant spatial and temporal backscatter signatures. PCA reduces the information contained in the temporal dataset to the first few new Principal Component (PC) images. Advantages of PCA include the ability to filter out temporal autocorrelation and to relegate speckle to the higher-order PC images. A PCA was performed using ERDAS Imagine on a time series of PALSAR dates. Hydroperiod maps were created by separating the PALSAR dates into two date ranges, 2006-2008 and 2010, and performing an unsupervised classification on the PCA results.
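The reported analysis was done in ERDAS Imagine; a minimal, library-level sketch of the same temporal PCA idea is shown below: a stack of co-registered backscatter images is reshaped to pixels-by-dates and the leading principal components, which capture the dominant temporal backscatter signatures, are recovered with scikit-learn. The array shapes and the synthetic stack are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def temporal_pca(image_stack, n_components=3):
    """image_stack : (n_dates, rows, cols) array of co-registered backscatter
    images (e.g. PALSAR dates). Returns (n_components, rows, cols) PC images."""
    n_dates, rows, cols = image_stack.shape
    X = image_stack.reshape(n_dates, -1).T        # pixels as samples, dates as features
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                 # (pixels, n_components)
    print("explained variance ratios:", pca.explained_variance_ratio_)
    return scores.T.reshape(n_components, rows, cols)

# Synthetic stand-in for a PALSAR time series (real data would be read from files).
rng = np.random.default_rng(1)
stack = rng.normal(size=(6, 100, 120)).astype(np.float32)
pc_images = temporal_pca(stack)
```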
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
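The fingerprints and target panels used in the study are not reproduced here; as a sketch of the two model types the abstract contrasts, the example below trains a Bernoulli naive Bayes classifier on hypothetical binary activity-profile features and an SVM on hypothetical structural fingerprint bits for the same compounds. All data are random placeholders.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_compounds = 300
profiles = rng.integers(0, 2, size=(n_compounds, 50))       # activity vs. 50 other targets
fingerprints = rng.integers(0, 2, size=(n_compounds, 256))   # structural bits
# Hypothetical label: activity against one held-out target, loosely tied to the profile.
y = (profiles[:, :5].sum(axis=1) + rng.integers(0, 2, n_compounds) > 3).astype(int)

nb_profile = BernoulliNB()           # profile-based model (no structure used)
svm_structure = SVC(kernel="rbf")    # structure-based model
print("NB on activity profiles:", cross_val_score(nb_profile, profiles, y, cv=5).mean())
print("SVM on fingerprints    :", cross_val_score(svm_structure, fingerprints, y, cv=5).mean())
```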
ERIC Educational Resources Information Center
Blackman, Gabrielle L.; Ostrander, Rick; Herman, Keith C.
2005-01-01
Although ADHD and depression are common comorbidities in youth, few studies have examined this particular clinical presentation. To address method bias limitations of previous research, this study uses multiple informants to compare the academic, social, and clinical functioning of children with ADHD, children with ADHD and depression, and…
ERIC Educational Resources Information Center
Sargeant, Joan; MacLeod, Tanya; Sinclair, Douglas; Power, Mary
2011-01-01
Introduction: The Colleges of Physicians and Surgeons of Alberta and Nova Scotia (CPSNS) use a standardized multisource feedback program, the Physician Achievement Review (PAR/NSPAR), to provide physicians with performance assessment data via questionnaires from medical colleagues, coworkers, and patients on 5 practice domains: consultation…
NASA Astrophysics Data System (ADS)
Luo, Qiu; Xin, Wu; Qiming, Xiong
2017-06-01
In vegetation remote sensing information extraction, phenological features are often not considered and the remote sensing analysis algorithms perform poorly. To solve this problem, a method for extracting vegetation information from remote sensing data based on EVI time series and a decision-tree classification with multi-source branch similarity is proposed. Firstly, to improve the stability of recognition accuracy over the time series, the seasonal features of vegetation are extracted based on the fitting span range of the time series. Secondly, decision-tree similarity is assessed by adaptively selecting the path or the probability parameter of component prediction; as an index, it evaluates the degree of task association, decides whether to perform migration of the multi-source decision tree, and ensures the speed of migration. Finally, the accuracy of classification and recognition of pests and diseases reaches 87%-98% for commercial Dalbergia hainanensis forest, which is significantly better than the 80%-96% accuracy of MODIS coverage in this area, verifying the validity of the proposed method.
Li, Jian; Yu, Haiyang; Wang, Sijian; Wang, Wei; Chen, Qian; Ma, Yanmin; Zhang, Yi; Wang, Tao
2018-01-01
Imbalanced hepatic glucose homeostasis is one of the critical pathologic events in the development of metabolic syndromes (MSs). Therefore, regulation of imbalanced hepatic glucose homeostasis is important in drug development for MS treatment. In this review, we discuss the major targets that regulate hepatic glucose homeostasis in human physiologic and pathophysiologic processes, involving hepatic glucose uptake, glycolysis and glycogen synthesis, and summarize their changes in MSs. Recent literature suggests the necessity of multitarget drugs in the management of MS disorder for regulation of imbalanced glucose homeostasis in both experimental models and MS patients. Here, we highlight the potential bioactive compounds from natural products with medicinal or health care values, and focus on polypharmacologic and multitarget natural products with effects on various signaling pathways in hepatic glucose metabolism. This review shows the advantage and feasibility of discovering multicompound-multitarget drugs from natural products, and providing a new perspective of ways on drug and functional food development for MSs.
Wang, Sijian; Wang, Wei; Chen, Qian; Ma, Yanmin; Zhang, Yi; Wang, Tao
2018-01-01
Imbalanced hepatic glucose homeostasis is one of the critical pathologic events in the development of metabolic syndromes (MSs). Therefore, regulation of imbalanced hepatic glucose homeostasis is important in drug development for MS treatment. In this review, we discuss the major targets that regulate hepatic glucose homeostasis in human physiologic and pathophysiologic processes, involving hepatic glucose uptake, glycolysis and glycogen synthesis, and summarize their changes in MSs. Recent literature suggests the necessity of multitarget drugs in the management of MS disorder for regulation of imbalanced glucose homeostasis in both experimental models and MS patients. Here, we highlight the potential bioactive compounds from natural products with medicinal or health care values, and focus on polypharmacologic and multitarget natural products with effects on various signaling pathways in hepatic glucose metabolism. This review shows the advantage and feasibility of discovering multicompound–multitarget drugs from natural products, and providing a new perspective of ways on drug and functional food development for MSs. PMID:29391777
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access to only a small number of labeled examples from the target domain. Therefore, the success of supervised DAL in this "small sample" regime requires the effective utilization of large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the frameworks established in existing model-based DAL methods for function learning, by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and involves two techniques. Firstly, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Secondly, considering robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with nonlinear learning problems, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, visual video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.
Specialty-specific multi-source feedback: assuring validity, informing training.
Davies, Helena; Archer, Julian; Bateman, Adrian; Dewar, Sandra; Crossley, Jim; Grant, Janet; Southgate, Lesley
2008-10-01
The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. Pearson correlation of MSF scores with OSPE performances was 0.48 (P = 0.001) and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = - 2.76, d.f. = 89, P < 0.01). Trainees scored least highly in relation to ability to use histopathology to solve clinical problems (mean = 4.39) and provision of good reports (mean = 4.39). Three of six doctors whose means were < 4.0 received free text comments about report writing. There were 83 forms with aggregate scores of < 4. Of these, 19.2% included comments about report writing. Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context as, in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
NASA Astrophysics Data System (ADS)
Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.
2016-09-01
Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel
2013-01-01
Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDF) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) which may appear in real biomedical data. Of the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
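A minimal sketch of the kind of distribution-distance comparison described above, using two of the measures the authors single out: an information-theoretic distance (Jensen-Shannon) and the Earth Mover's (Wasserstein) distance, both available in SciPy. The binning choice and the sample data are assumptions.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

def source_distances(a, b, bins=30):
    """Compare one variable across two data sources a and b."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    return {
        "jensen_shannon": jensenshannon(pa, pb),   # 0 = identical distributions
        "earth_mover": wasserstein_distance(a, b), # in the units of the variable
    }

rng = np.random.default_rng(3)
site_a = rng.normal(loc=120.0, scale=15.0, size=500)  # e.g. a lab value at source A
site_b = rng.normal(loc=128.0, scale=15.0, size=500)  # slightly shifted at source B
print(source_distances(site_a, site_b))
```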
Using multilevel, multisource needs assessment data for planning community interventions.
Levy, Susan R; Anderson, Emily E; Issel, L Michele; Willis, Marilyn A; Dancy, Barbara L; Jacobson, Kristin M; Fleming, Shirley G; Copper, Elizabeth S; Berrios, Nerida M; Sciammarella, Esther; Ochoa, Mónica; Hebert-Beirne, Jennifer
2004-01-01
African Americans and Latinos share higher rates of cardiovascular disease (CVD) and diabetes compared with Whites. These diseases have common risk factors that are amenable to primary and secondary prevention. The goal of the Chicago REACH 2010-Lawndale Health Promotion Project is to eliminate disparities related to CVD and diabetes experienced by African Americans and Latinos in two contiguous Chicago neighborhoods using a community-based prevention approach. This article shares findings from the Phase 1 participatory planning process and discusses the implications these findings and lessons learned may have for programs aiming to reduce health disparities in multiethnic communities. The triangulation of data sources from the planning phase enriched interpretation and led to more creative and feasible suggestions for programmatic interventions across the four levels of the ecological framework. Multisource data yielded useful information for program planning and a better understanding of the cultural differences and similarities between African Americans and Latinos.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged rapidly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
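The analytic scatter model and the reconstruction algorithm are detailed in the paper; the sketch below only illustrates the generic iterative correction loop the abstract describes: estimate scatter from the current reconstruction, subtract it from the measured projections, and reconstruct again. The estimate_scatter and reconstruct callables are placeholders for the paper's physics model and CT reconstruction.

```python
import numpy as np

def iterative_scatter_correction(measured_proj, reconstruct, estimate_scatter,
                                 n_iter=3):
    """Generic iterative scatter-correction loop.

    measured_proj    : measured projections (primary + scatter), ndarray.
    reconstruct      : callable, projections -> image (e.g. FBP or iterative recon).
    estimate_scatter : callable, image -> scatter estimate in the projection
                       domain (stands in for an analytic forward/cross-scatter model).
    """
    image = reconstruct(measured_proj)
    for _ in range(n_iter):
        scatter = estimate_scatter(image)
        corrected = np.clip(measured_proj - scatter, 0.0, None)  # keep projections non-negative
        image = reconstruct(corrected)
    return image
```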
2009-01-01
Background Leadership courses and multi-source feedback are widely used developmental tools for leaders in health care. On this background we aimed to study the additional effect of a leadership course following a multi-source feedback procedure compared to multi-source feedback alone especially regarding development of leadership skills over time. Methods Study participants were consultants responsible for postgraduate medical education at clinical departments. Study design: pre-post measures with an intervention and control group. The intervention was participation in a seven-day leadership course. Scores of multi-source feedback from the consultants responsible for education and respondents (heads of department, consultants and doctors in specialist training) were collected before and one year after the intervention and analysed using Mann-Whitney's U-test and Multivariate analysis of variances. Results There were no differences in multi-source feedback scores at one year follow up compared to baseline measurements, either in the intervention or in the control group (p = 0.149). Conclusion The study indicates that a leadership course following a MSF procedure compared to MSF alone does not improve leadership skills of consultants responsible for education in clinical departments. Developing leadership skills takes time and the time frame of one year might have been too short to show improvement in leadership skills of consultants responsible for education. Further studies are needed to investigate if other combination of initiatives to develop leadership might have more impact in the clinical setting. PMID:20003311
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge for learning tasks in weakly labeled or unlabeled target domains. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information for the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among the sources. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose incomplete multisource transfer learning through two directions of knowledge transfer, i.e., cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This practice compensates for missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for missing data from one portion of a source to another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
Multisource, Phase-controlled Radiofrequency for Treatment of Skin Laxity
Moreno-Moraga, Javier; Muñoz, Estefania; Cornejo Navarro, Paloma
2011-01-01
Objective: The objective of this study was to analyze the correlation between degrees of clinical improvement and microscopic changes detected using confocal microscopy at the temperature gradients reached in patients treated for skin laxity with a phase-controlled, multisource radiofrequency system. Design and setting: Patients with skin laxity in the abdominal area were treated in six sessions with radiofrequency (the first 4 sessions were held at 2-week intervals and the 2 remaining sessions at 3-week intervals). Patients attended monitoring at 6, 9, and 12 months. Participants: 33 patients (all women). Measurements: The authors recorded the following: variations in weight, measurements of the contour of the treated area and control area, evaluation of clinical improvement by the clinician and by the patient, images taken using an infrared camera, temperature (before, immediately after, and 20 minutes after the procedure), and confocal microscopy images (before treatment and at 6, 9, and 12 months). The degree of clinical improvement was cross-checked by two external observers (clinicians). The procedure was performed using a new phase-controlled, multipolar radiofrequency system. Results: The results reveal a greater degree of clinical improvement in patients with surface temperature increases greater than 11.5°C at the end of the procedure and remaining greater than 4.5°C 20 minutes later. These changes induced by radiofrequency were compared with the structural improvements observed at the dermal-epidermal junction using confocal microscopy. The changes are more intense in, and statistically correlated with, patients who show a greater degree of improvement and have higher temperature gradients at the end of the procedure and 20 minutes later. Conclusion: Monitoring and the use of parameters to evaluate end-point values in skin quality treatment by multisource, phase-controlled radiofrequency can help optimize aesthetic outcome. PMID:21278896
Development of a Multi-Target Contingency Management Intervention for HIV Positive Substance Users.
Stitzer, Maxine; Calsyn, Donald; Matheson, Timothy; Sorensen, James; Gooden, Lauren; Metsch, Lisa
2017-01-01
Contingency management (CM) interventions generally target a single behavior such as attendance or drug use. However, disease outcomes are mediated by complex chains of both healthy and interfering behaviors enacted over extended periods of time. This paper describes a novel multi-target contingency management (CM) program developed for use with HIV-positive substance users enrolled in a CTN multi-site study (0049 Project HOPE). Participants were randomly assigned to usual care (referral to health care and SUD treatment) or to 6-month strength-based patient navigation interventions with (PN+CM) or without (PN only) the CM program. The primary outcome of the trial was viral load suppression at 12 months post-randomization. Up to $1160 could be earned over 6 months under escalating schedules of reinforcement. Earnings were divided among eight CM targets: two PN-related (PN visits, paperwork completion; 26% of possible earnings), four health-related (HIV care visits, lab blood draw visits, medication check, viral load suppression; 47% of possible earnings), and two related to drug-use abatement (treatment entry, submission of drug-negative UAs; 27% of earnings). The paper describes the rationale for the selection of targets, pay amounts, and pay schedules. The CM program was compatible with and fully integrated into the PN intervention. The study design will allow comparison of behavioral and health outcomes for participants receiving PN with and without CM; the results will inform future multi-target CM development. Copyright © 2016 Elsevier Inc. All rights reserved.
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information. PMID:25938760
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.
Liu, Zengkai; Liu, Yonghong; Shan, Hongkai; Cai, Baoping; Huang, Qing
2015-01-01
This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion based methodology. Compared with the conventional fault diagnosis with only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurement are decomposed by the EEMD method and the energy of intrinsic mode functions (IMFs) are calculated as fault features. These features are added into the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump and the structure and parameters of the Bayesian network is established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when sensor data is used only. A case study has demonstrated that some information from human observation or system repair records is very helpful to the fault diagnosis. It is effective and efficient in diagnosing faults based on uncertain, incomplete information.
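The Bayesian network structure and full feature set are given in the paper; the sketch below shows only the first step the abstract describes, turning IMF energies into normalized fault features. The EEMD decomposition itself is assumed to come from an existing implementation (for example the PyEMD package); here two fake IMFs stand in so the example stays dependency-free.

```python
import numpy as np

def imf_energy_features(imfs):
    """imfs : (n_imf, n_samples) array of intrinsic mode functions obtained from
    EEMD of a vibration signal. Returns normalized per-IMF energies, which serve
    as evidence for the fault-feature layer of the Bayesian network."""
    energies = np.sum(np.asarray(imfs) ** 2, axis=1)
    return energies / energies.sum()

# Toy signal: a 'healthy' tone plus a higher-frequency 'fault' component.
t = np.linspace(0, 1, 2048)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
# A real workflow would decompose `signal` with EEMD; we fake two 'IMFs' here.
fake_imfs = np.vstack([0.3 * np.sin(2 * np.pi * 400 * t),
                       np.sin(2 * np.pi * 50 * t)])
print(imf_energy_features(fake_imfs))   # roughly [0.08, 0.92]
```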
Chadha, Navriti; Silakari, Om
2017-09-01
Diabetic complications constitute a complex metabolic disorder that develops primarily due to prolonged hyperglycemia in the body. The complexity of the disease state, as well as the unifying pathophysiology discussed in the literature, suggests that multi-targeted agents with multiple complementary biological activities may offer more promising therapy for intervention in the disease than single-target drugs. In the present study, novel thiazolidine-2,4-dione analogues were designed as multi-targeted agents directed against the molecular pathways involved in diabetic complications, using knowledge-based as well as in silico approaches such as pharmacophore mapping and molecular docking. The hit molecules were duly synthesized, and biochemical evaluation of these molecules against aldose reductase (ALR2), protein kinase Cβ (PKCβ) and poly(ADP-ribose) polymerase 1 (PARP-1) led to the identification of compound 2, which showed good potency against the PARP-1 and ALR2 enzymes. These positive results support the development of a low-cost multi-targeted agent with putative roles in diabetic complications. Copyright © 2017 Elsevier Inc. All rights reserved.
Advanced techniques for the storage and use of very large, heterogeneous spatial databases
NASA Technical Reports Server (NTRS)
Peuquet, Donna J.
1987-01-01
Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.
NASA Astrophysics Data System (ADS)
Albreht, Alen; Vovk, Irena; Mavri, Janez; Marco-Contelles, Jose; Ramsay, Rona R.
2018-05-01
Successful propargylamine drugs such as deprenyl inactivate monoamine oxidase (MAO), a target in multi-faceted approaches to prevent neurodegeneration in the aging population, but the chemical structure and mechanism of the irreversible inhibition are still debated. We characterized the covalent cyanine structure linking the multi-target propargylamine inhibitor ASS234 and the flavin adenine dinucleotide in MAO-A using a combination of ultra-high performance liquid chromatography, spectroscopy, mass spectrometry, and computational methods. The partial double bond character of the cyanine chain gives rise to 4 interconverting geometric isomers of the adduct which were chromatographically separated at low temperatures. The configuration of the cyanine linker governs adduct stability with segments of much higher flexibility and rigidity than previously hypothesized. The findings indicate the importance of intramolecular electrostatic interactions in the MAO binding site and provide key information relevant to incorporation of the propargyl moiety into novel multi-target drugs. Based on the structure, we propose a mechanism of MAO inactivation applicable to all propargylamine inhibitors.
Virtual target tracking (VTT) as applied to mobile satellite communication networks
NASA Astrophysics Data System (ADS)
Amoozegar, Farid
1999-08-01
Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of Global Communication Systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing stand to benefit from a huge market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. We discuss how Virtual Target Tracking would bring more diversity to target tracking research.
Chatzidionysiou, Katerina; Hetland, Merete Lund; Frisell, Thomas; Di Giuseppe, Daniela; Hellgren, Karin; Glintborg, Bente; Nordström, Dan; Aaltonen, Kalle; Törmänen, Minna RK; Klami Kristianslund, Eirik; Kvien, Tore K; Provan, Sella A; Guðbjörnsson, Bjorn; Dreyer, Lene; Kristensen, Lars Erik; Jørgensen, Tanja Schjødt; Jacobsson, Lennart; Askling, Johan
2018-01-01
There are increasing needs for detailed real-world data on rheumatic diseases and their treatments. Clinical register data are essential sources of information that can be enriched through linkage to additional data sources such as national health data registers. Detailed analyses call for international collaborative observational research to increase the number of patients and the statistical power. Such linkages and collaborations come with legal, logistic and methodological challenges. In collaboration between registers of inflammatory arthritides in Sweden, Denmark, Norway, Finland and Iceland, we plan to enrich, harmonise and standardise individual data repositories to investigate analytical approaches to multisource data, to assess the viability of different logistical approaches to data protection and sharing and to perform collaborative studies on treatment effectiveness, safety and health-economic outcomes. This narrative review summarises the needs and potentials and the challenges that remain to be overcome in order to enable large-scale international collaborative research based on clinical and other types of data. PMID:29682328
Distributed Fusion in Sensor Networks with Information Genealogy
2011-06-28
...image processing [2], acoustic and speech recognition [3], multitarget tracking [4], distributed fusion [5], and Bayesian inference [6-7]. ... used in speech recognition and other classification applications [8], but their use in underwater mine classification is limited. In this paper, we...
Detecting misinformation and knowledge conflicts in relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian
2014-06-01
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commander's information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against the data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains the uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data) while producing 82% recall and 90% precision in a realistic noise situation (15% missing attributes).
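The core operation described above is matching a conflict pattern graph against a data graph. The sketch below illustrates that idea with networkx subgraph matching on a hypothetical toy graph; the paper's soft co-reference and uncertainty propagation are not reproduced here, and all node and relation names are assumptions.

```python
# Minimal sketch: detect a knowledge-conflict pattern by subgraph matching.
# Toy data graph; node/relation names are hypothetical.
import networkx as nx
from networkx.algorithms import isomorphism

# Data graph: one entity reported at two different locations by two sources.
data = nx.Graph()
data.add_node("person_1", type="entity")
data.add_node("loc_A", type="location")
data.add_node("loc_B", type="location")
data.add_edge("person_1", "loc_A", relation="located_at", source="report_1")
data.add_edge("person_1", "loc_B", relation="located_at", source="report_2")

# Pattern graph: an entity linked to two distinct locations (potential conflict).
pattern = nx.Graph()
pattern.add_node("e", type="entity")
pattern.add_node("l1", type="location")
pattern.add_node("l2", type="location")
pattern.add_edge("e", "l1", relation="located_at")
pattern.add_edge("e", "l2", relation="located_at")

matcher = isomorphism.GraphMatcher(
    data, pattern,
    node_match=lambda d, p: d["type"] == p["type"],
    edge_match=lambda d, p: d["relation"] == p["relation"],
)
for mapping in matcher.subgraph_isomorphisms_iter():
    print("candidate conflict:", mapping)
```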
Two-phase framework for near-optimal multi-target Lambert rendezvous
NASA Astrophysics Data System (ADS)
Bang, Jun; Ahn, Jaemyung
2018-03-01
This paper proposes a two-phase framework to obtain a near-optimal solution of the multi-target Lambert rendezvous problem. The objective of the problem is to determine the minimum-cost rendezvous sequence and trajectories to visit a given set of targets within a maximum mission duration. The first phase solves a series of single-target rendezvous problems for all departure-arrival object pairs to generate the elementary solutions, which provide candidate rendezvous trajectories. The second phase formulates a variant of the traveling salesman problem (TSP) using the elementary solutions prepared in the first phase and determines the final rendezvous sequence and trajectories of the multi-target rendezvous problem. The validity of the proposed optimization framework is demonstrated through an asteroid exploration case study.
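A minimal sketch of the two-phase structure, assuming a hypothetical pairwise cost table in place of actual Lambert solutions: phase 1 tabulates transfer costs, phase 2 brute-forces the visiting sequence (practical only for a handful of targets).

```python
# Sketch of the two-phase idea: (1) tabulate pairwise transfer costs,
# (2) search the visiting sequence. The cost table is a placeholder standing
# in for single-target Lambert solutions, not an orbital solver.
from itertools import permutations

targets = ["T1", "T2", "T3", "T4"]
# Phase 1: hypothetical elementary solutions, cost[(a, b)] = delta-v of a -> b.
cost = {
    ("start", "T1"): 1.2, ("start", "T2"): 2.0, ("start", "T3"): 1.7, ("start", "T4"): 2.4,
    ("T1", "T2"): 0.9, ("T1", "T3"): 1.5, ("T1", "T4"): 2.1,
    ("T2", "T1"): 0.9, ("T2", "T3"): 0.8, ("T2", "T4"): 1.6,
    ("T3", "T1"): 1.5, ("T3", "T2"): 0.8, ("T3", "T4"): 0.7,
    ("T4", "T1"): 2.1, ("T4", "T2"): 1.6, ("T4", "T3"): 0.7,
}

# Phase 2: brute-force the open-path TSP variant over all target orderings.
def sequence_cost(seq):
    legs = [("start", seq[0])] + list(zip(seq, seq[1:]))
    return sum(cost[leg] for leg in legs)

best = min(permutations(targets), key=sequence_cost)
print(best, round(sequence_cost(best), 2))
```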
Generalized information fusion and visualization using spatial voting and data modeling
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.
2013-05-01
We present a novel and innovative information fusion and visualization framework for multi-source intelligence (multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be converted into numerical form for further processing downstream, followed by a short description of how this information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber layers for the purpose of tracking cyber personas. Finally we describe a path ahead for creating interactive agile networks through defender customized Cyber-cubes for network configuration and attack visualization.
NASA Astrophysics Data System (ADS)
Zhang, Kongwen; Hu, Baoxin; Robinson, Justin
2014-01-01
The emerald ash borer (EAB) poses a significant economic and environmental threat to ash trees in southern Ontario, Canada, and the northern states of the USA. It is critical that effective technologies are urgently developed to detect, monitor, and control the spread of EAB. This paper presents a methodology using multisourced data to predict potential infestations of EAB in the town of Oakville, Ontario, Canada. The information combined in this study includes remotely sensed data, such as high spatial resolution aerial imagery, commercial ground and airborne hyperspectral data, and Google Earth imagery, in addition to nonremotely sensed data, such as archived paper maps and documents. This wide range of data provides extensive information that can be used for early detection of EAB, yet their effective employment and use remain a significant challenge. A prediction function was developed to estimate the EAB infestation states of individual ash trees using three major attributes: leaf chlorophyll content, tree crown spatial pattern, and prior knowledge. Comparison between these predicted values and a ground-based survey demonstrated an overall accuracy of 62.5%, with 22.5% omission and 18.5% commission errors.
A multi-source feedback tool for measuring a subset of Pediatrics Milestones.
Schwartz, Alan; Margolis, Melissa J; Multerer, Sara; Haftel, Hilary M; Schumacher, Daniel J
2016-10-01
The Pediatrics Milestones Assessment Pilot employed a new multisource feedback (MSF) instrument to assess nine Pediatrics Milestones among interns and subinterns in the inpatient context. The aim was to report validity evidence for the MSF tool for informing milestone classification decisions. We obtained MSF instruments from different raters for each learner in each rotation and present evidence for validity based on the unified validity framework. One hundred and ninety-two interns and 41 subinterns at 18 Pediatrics residency programs received a total of 1084 MSF forms from faculty (40%), senior residents (34%), nurses (22%), and other staff (4%). Variance in ratings was associated primarily with the rater (32%) and the learner (22%). The milestone factor structure fit the data better than simpler structures. In all domains except professionalism, ratings by nurses were significantly lower than those by faculty, and ratings by other staff were significantly higher. Ratings were higher when the rater had observed the learner for longer periods and had a positive global opinion of the learner. Ratings of interns and subinterns did not differ, except for ratings by senior residents. MSF-based scales correlated with summative milestone scores. We obtained moderately reliable MSF ratings of interns and subinterns in the inpatient context to inform some milestone assignments.
Imputation for multisource data with comparison and assessment techniques
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
2017-12-27
Missing data are a prevalent issue in analyses involving data collection. The problem of missing data is exacerbated for multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion. In this scenario, missing data are more likely to occur and can lead to discarding a large amount of the collected data; however, the information from observed sensors can be leveraged to estimate the values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as the common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of the imputation methods is compared with the mean absolute deviation; however, rather than using this metric solely to rank the methods, we also propose an approach to identify significant differences. Imputation techniques are also assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance of imputed datasets is compared with a marginalized dataset through a weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
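A minimal sketch of the ridge-regression imputation idea, assuming synthetic correlated sensors: the missing readings of one sensor are predicted from the other sensors using rows where everything was observed. The paper's state-space model and down-weighting steps are omitted.

```python
# Sketch: impute a sensor's missing readings from correlated sensors via
# ridge regression trained on fully observed time steps (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 200
common = rng.normal(size=n)                       # shared signal across sensors
sensors = np.column_stack([common + 0.1 * rng.normal(size=n) for _ in range(4)])
y = sensors[:, 0].copy()
missing = rng.random(n) < 0.2                     # 20% of sensor 0 is missing

model = Ridge(alpha=1.0)
model.fit(sensors[~missing, 1:], y[~missing])     # train on complete rows
y_imputed = y.copy()
y_imputed[missing] = model.predict(sensors[missing, 1:])

mad = np.mean(np.abs(y_imputed[missing] - y[missing]))
print(f"mean absolute deviation of imputed values: {mad:.3f}")
```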
Castro, Eduardo; Martínez-Ramón, Manel; Pearlson, Godfrey; Sui, Jing; Calhoun, Vince D.
2011-01-01
Pattern classification of brain imaging data can enable the automatic detection of differences in cognitive processes of specific groups of interest. Furthermore, it can also give neuroanatomical information related to the regions of the brain that are most relevant to detect these differences by means of feature selection procedures, which are also well-suited to deal with the high dimensionality of brain imaging data. This work proposes the application of recursive feature elimination using a machine learning algorithm based on composite kernels to the classification of healthy controls and patients with schizophrenia. This framework, which evaluates nonlinear relationships between voxels, analyzes whole-brain fMRI data from an auditory task experiment that is segmented into anatomical regions and recursively eliminates the uninformative ones based on their relevance estimates, thus yielding the set of most discriminative brain areas for group classification. The collected data were processed using two analysis methods: the general linear model (GLM) and independent component analysis (ICA). GLM spatial maps as well as ICA temporal lobe and default mode component maps were then input to the classifier. A mean classification accuracy of up to 95% estimated with a leave-two-out cross-validation procedure was achieved by doing multi-source data classification. In addition, it is shown that the classification accuracy rate obtained by using multi-source data surpasses that reached by using single-source data, hence showing that this algorithm takes advantage of the complementary nature of GLM and ICA. PMID:21723948
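A minimal sketch of recursive feature elimination over region-level features, assuming scikit-learn and random placeholder data; a linear SVM stands in for the paper's composite-kernel machine, and GLM/ICA maps are replaced by synthetic features.

```python
# Sketch: recursive feature elimination with an SVM on region-level features.
# Synthetic subjects, labels and regions; not the composite-kernel method itself.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))          # 60 subjects x 8 anatomical regions
y = rng.integers(0, 2, size=60)       # controls vs. patients (synthetic labels)

selector = RFE(SVC(kernel="linear"), n_features_to_select=3, step=1)
selector.fit(X, y)
print("regions kept:", np.where(selector.support_)[0])
print("elimination ranking:", selector.ranking_)
```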
Understanding the Influence of Emotions and Reflection upon Multi-Source Feedback Acceptance and Use
ERIC Educational Resources Information Center
Sargeant, Joan; Mann, Karen; Sinclair, Douglas; Van der Vleuten, Cees; Metsemakers, Job
2008-01-01
Introduction: Receiving negative performance feedback can elicit negative emotional reactions which can interfere with feedback acceptance and use. This study investigated emotional responses of family physicians' participating in a multi-source feedback (MSF) program, sources of these emotions, and their influence upon feedback acceptance and…
Antal, Péter; Kiszel, Petra Sz.; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F.; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba
2012-01-01
Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses, we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods, and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses a Bayesian network representation to provide a detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to assess whether a variable is directly relevant or whether its association is only mediated. With frequentist methods, one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2–1.8); p = 3×10−4). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and in human asthmatics. In the BN-BMLA analysis, altogether 5 SNPs in 4 genes were found to be relevant to the asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step, a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of the relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong-relevance-based methods to include partial relevance, global characterization of relevance and multi-target relevance. PMID:22432035
Naphthoquinone Derivatives Exert Their Antitrypanosomal Activity via a Multi-Target Mechanism
Mazet, Muriel; Perozzo, Remo; Bergamini, Christian; Prati, Federica; Fato, Romana; Lenaz, Giorgio; Capranico, Giovanni; Brun, Reto; Bakker, Barbara M.; Michels, Paul A. M.; Scapozza, Leonardo; Bolognesi, Maria Laura; Cavalli, Andrea
2013-01-01
Background and Methodology Recently, we reported on a new class of naphthoquinone derivatives showing a promising anti-trypanosomatid profile in cell-based experiments. The lead of this series (B6, 2-phenoxy-1,4-naphthoquinone) showed an ED50 of 80 nM against Trypanosoma brucei rhodesiense, and a selectivity index of 74 with respect to mammalian cells. A multitarget profile for this compound is easily conceivable, because quinones, as natural products, serve plants as potent defense chemicals with an intrinsic multifunctional mechanism of action. To disclose such a multitarget profile of B6, we exploited a chemical proteomics approach. Principal Findings A functionalized congener of B6 was immobilized on a solid matrix and used to isolate target proteins from Trypanosoma brucei lysates. Mass analysis delivered two enzymes, i.e. glycosomal glycerol kinase and glycosomal glyceraldehyde-3-phosphate dehydrogenase, as potential molecular targets for B6. Both enzymes were recombinantly expressed and purified, and used for chemical validation. Indeed, B6 was able to inhibit both enzymes with IC50 values in the micromolar range. The multifunctional profile was further characterized in experiments using permeabilized Trypanosoma brucei cells and mitochondrial cell fractions. It turned out that B6 was also able to generate oxygen radicals, a mechanism that may additionally contribute to its observed potent trypanocidal activity. Conclusions and Significance Overall, B6 showed a multitarget mechanism of action, which provides a molecular explanation of its promising anti-trypanosomatid activity. Furthermore, the forward chemical genetics approach here applied may be viable in the molecular characterization of novel multitarget ligands. PMID:23350008
Designing multi-targeted agents: An emerging anticancer drug discovery paradigm.
Fu, Rong-Geng; Sun, Yuan; Sheng, Wen-Bing; Liao, Duan-Fang
2017-08-18
The dominant paradigm in drug discovery is to design ligands with maximum selectivity to act on individual drug targets. With the target-based approach, many new chemical entities have been discovered, developed, and further approved as drugs. However, there are a large number of complex diseases, such as cancer, that cannot be effectively treated or cured with a single medicine modulating the biological function of a single target. As simultaneous intervention at two (or multiple) targets relevant to cancer progression has shown improved therapeutic efficacy, the innovation of multi-targeted drugs has become a promising and prevailing research topic, and numerous multi-targeted anticancer agents are currently at various developmental stages. However, most multi-pharmacophore scaffolds are usually discovered by serendipity or screening, while rational design by combining existing pharmacophore scaffolds remains an enormous challenge. In this review, four types of multi-pharmacophore modes are discussed, and examples from the literature are used to introduce attractive lead compounds capable of simultaneously interfering with different enzymes or signaling pathways of cancer progression, revealing trends and insights to help the design of the next generation of multi-targeted anticancer agents. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
A ranking method for the concurrent learning of compounds with various activity profiles.
Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas
2015-01-01
In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set), each containing three different biological targets. The experiments show that ranking-based algorithms achieve increased performance for single- and multi-target virtual screening. Moreover, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile, compared to other multi-target SVM methods. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when many ligands with an already perfectly matching activity profile are not to be expected.
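For illustration, here is the generic pairwise trick behind SVM-based ranking, assuming synthetic descriptors and a hypothetical latent desirability score; it is not the authors' specific multi-target labeling scheme. Preference pairs are converted to a binary problem on feature differences and fitted with a linear SVM.

```python
# Sketch of SVM ranking: learn from pairs (a should rank above b) by
# classifying feature differences; synthetic compound descriptors.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 16))                 # compound descriptors
score = X[:, 0] + 0.5 * X[:, 1]                # latent desirability of profile

pairs = [(i, j) for i in range(100) for j in range(100) if score[i] > score[j] + 0.5]
diff = np.array([X[i] - X[j] for i, j in pairs])
labels = np.ones(len(pairs))
diff = np.vstack([diff, -diff])                # mirror pairs for both classes
labels = np.concatenate([labels, -labels])

ranker = LinearSVC(C=1.0).fit(diff, labels)
ranking = np.argsort(-X @ ranker.coef_.ravel())  # compounds ranked best-first
print("top 5 compounds:", ranking[:5])
```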
A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.
Zhao, Lei; Mi, Dong; Sun, Yeqing
2017-05-07
The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, such as Neyman's type A (N. A.) distribution. Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the high-flux case, a multitarget model based on the Gaussian distribution is proposed to describe cell inactivation effects under low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and the parameter ratio in the present model can be used as an alternative indicator of the radiation damage and radiosensitivity of cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
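For reference, the two baseline models the abstract compares against have the following textbook forms (these are the standard expressions, not the paper's Gaussian-based variant):

```latex
% Classical single-hit multi-target survival fraction (Poisson-based) and the
% linear-quadratic model; textbook forms only.
\[
  S_{\mathrm{MT}}(D) = 1 - \left(1 - e^{-D/D_0}\right)^{n},
  \qquad
  S_{\mathrm{LQ}}(D) = \exp\!\left(-\alpha D - \beta D^{2}\right),
\]
% where D is dose, D_0 the mean lethal dose per target, n the target number,
% and alpha, beta the linear and quadratic inactivation coefficients.
```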
Research on precise modeling of buildings based on multi-source data fusion of air to ground
NASA Astrophysics Data System (ADS)
Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong
2016-03-01
To address the accuracy problem in the precise modeling of buildings, a test study was conducted on multi-source data for buildings in the same test area, including rooftop data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of the building clusters were accurately extracted, a series of qualitative and quantitative analyses was carried out on the 2D interval between the outlines. The results provide reliable accuracy support for the precise modeling of buildings from air-ground multi-source data fusion; at the same time, solutions to some key technical problems are discussed.
An integrated multi-source energy harvester based on vibration and magnetic field energy
NASA Astrophysics Data System (ADS)
Hu, Zhengwen; Qiu, Jing; Wang, Xian; Gao, Yuan; Liu, Xin; Chang, Qijie; Long, Yibing; He, Xingduo
2018-05-01
In this paper, an integrated multi-source energy harvester (IMSEH) employing a specially shaped cantilever beam and a piezoelectric transducer to convert vibration and magnetic field energy into electrical energy is presented. The electric output performance of the proposed IMSEH has been investigated. Compared to a traditional multi-source energy harvester (MSEH) or single-source energy harvester (SSEH), the proposed IMSEH can simultaneously harvest vibration and magnetic field energy with an integrated structure, and the electric output is greatly improved. With other conditions kept identical, the IMSEH can obtain a high voltage of 12.8 V. Remarkably, the proposed IMSEH has great potential for application in wireless sensor networks.
Harth, Yoram
2015-03-01
In the last decade, radiofrequency (RF) energy has proven to be safe and highly efficacious for face and neck skin tightening, body contouring, and cellulite reduction. In contrast to first-generation Monopolar/Bipolar and "X-Polar" RF systems, which use one RF generator connected to one or more skin electrodes, multisource radiofrequency devices use six independent RF generators allowing efficient dermal heating to 52-55°C, with no pain or risk of other side effects. In this review, the basic science and clinical results of body contouring and cellulite treatment using a multisource radiofrequency system (Endymed PRO, Endymed, Cesarea, Israel) will be discussed and analyzed. © 2015 Wiley Periodicals, Inc.
Development of Physical Therapy Practical Assessment System by Using Multisource Feedback
ERIC Educational Resources Information Center
Hengsomboon, Ninwisan; Pasiphol, Shotiga; Sujiva, Siridej
2017-01-01
The purposes of the research were (1) to develop the physical therapy practical assessment system by using the multisource feedback (MSF) approach and (2) to investigate the effectiveness of the implementation of the developed physical therapy practical assessment system. The development of physical therapy practical assessment system by using MSF…
ERIC Educational Resources Information Center
Roberts, Martin J.; Campbell, John L.; Richards, Suzanne H.; Wright, Christine
2013-01-01
Introduction: Multisource feedback (MSF) ratings provided by patients and colleagues are often poorly correlated with doctors' self-assessments. Doctors' reactions to feedback depend on its agreement with their own perceptions, but factors influencing self-other agreement in doctors' MSF ratings have received little attention. We aimed to identify…
NASA Astrophysics Data System (ADS)
Xie, Jiayu; Wang, Gongwen; Sha, Yazhou; Liu, Jiajun; Wen, Botao; Nie, Ming; Zhang, Shuai
2017-04-01
Integrating multi-source geoscience information (such as geology, geophysics, geochemistry, and remote sensing) using GIS mapping is one of the key topics and frontiers in quantitative geosciences for mineral exploration. GIS prospectivity mapping and three-dimensional (3D) modeling can be used not only to extract exploration criteria and delineate metallogenic targets but also to provide important information for the quantitative assessment of mineral resources. This paper uses the Shangnan district of Shaanxi province (China) as a case study area. GIS mapping and targeting of potential granite-hydrothermal uranium were conducted in the study area by combining weights of evidence (WofE) and concentration-area (C-A) fractal methods with multi-source geoscience information. 3D deposit-scale modeling using GOCAD software was performed to validate the shapes and features of the potential targets in the subsurface. The results show that: (1) the known deposits have potential zones at depth, and the 3D geological models can delineate surface or subsurface ore-forming features, which can be used to analyze the uncertainty in the shape and features of the prospectivity mapping in the subsurface; (2) single geochemical or remote sensing anomalies at the surface must be combined with geophysical depth exploration criteria to identify potential targets; and (3) zones with single or sparse exploration criteria and few mineralization spots at the surface carry high uncertainty in terms of exploration targeting.
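A minimal sketch of the weights-of-evidence calculation for one binary evidence layer against known occurrences, assuming synthetic rasters; the C-A fractal thresholding and GOCAD 3D modeling steps are not shown.

```python
# Sketch: W+ and W- for a binary evidence layer vs. deposit cells.
import numpy as np

rng = np.random.default_rng(3)
evidence = rng.random((100, 100)) < 0.3          # binary evidence layer
deposit = rng.random((100, 100)) < 0.02          # known mineral occurrences

def weights_of_evidence(evidence, deposit):
    b_d = np.sum(evidence & deposit)             # evidence present, deposit present
    b_nd = np.sum(evidence & ~deposit)
    nb_d = np.sum(~evidence & deposit)
    nb_nd = np.sum(~evidence & ~deposit)
    w_plus = np.log((b_d / (b_d + nb_d)) / (b_nd / (b_nd + nb_nd)))
    w_minus = np.log((nb_d / (b_d + nb_d)) / (nb_nd / (b_nd + nb_nd)))
    return w_plus, w_minus, w_plus - w_minus     # contrast C = W+ - W-

print(weights_of_evidence(evidence, deposit))
```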
Kalash, Leen; Val, Cristina; Azuaje, Jhonny; Loza, María I; Svensson, Fredrik; Zoufir, Azedine; Mervin, Lewis; Ladds, Graham; Brea, José; Glen, Robert; Sotelo, Eddy; Bender, Andreas
2017-12-30
Compounds designed to display polypharmacology may have utility in treating complex diseases, where activity at multiple targets is required to produce a clinical effect. In particular, suitable compounds may be useful in treating neurodegenerative diseases by promoting neuronal survival in a synergistic manner via their multi-target activity at the adenosine A1 and A2A receptors (A1R and A2AR) and phosphodiesterase 10A (PDE10A), which modulate intracellular cAMP levels. Hence, in this work we describe a computational method for the design of synthetically feasible ligands that bind to A1 and A2A receptors and inhibit phosphodiesterase 10A (PDE10A), involving a retrosynthetic approach employing in silico target prediction and docking, which may be generally applicable to multi-target compound design at several target classes. This approach has identified 2-aminopyridine-3-carbonitriles as the first multi-target ligands at A1R, A2AR and PDE10A, by showing agreement between the ligand- and structure-based predictions at these targets. The series were synthesized via an efficient one-pot scheme and validated pharmacologically as A1R/A2AR-PDE10A ligands, with IC50 values of 2.4-10.0 μM at PDE10A and Ki values of 34-294 nM at A1R and/or A2AR. Furthermore, selectivity profiling of the synthesized 2-aminopyridine-3-carbonitriles against other subtypes of both protein families showed that the multi-target ligand 8 exhibited a minimum of twofold selectivity over all tested off-targets. In addition, both compounds 8 and 16 exhibited the desired multi-target profile, which could be considered for further functional efficacy assessment, analog modification for the improvement of selectivity towards A1R, A2AR and PDE10A collectively, and evaluation of their potential synergy in modulating cAMP levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Y; Keall, P; Poulsen, P
Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing the treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories with average motion magnitudes of 2 and 5 mm are used for the simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes as well as the no-motion-correction treatment does and covers the moving prostate as well as real-time prostate-only tracking does. Multi-target tracking reduces >90% of the uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of the uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations. This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.
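The uncertainty metric described above (underexposed target area plus overexposed healthy-tissue area) can be illustrated with simple binary masks; the circular toy apertures and offsets below are assumptions, not the clinical MLC geometry.

```python
# Sketch: geometric uncertainty as mismatched area between the true target
# and the delivered beam aperture, on binary masks.
import numpy as np

y, x = np.mgrid[0:200, 0:200]

def disk(cx, cy, r):
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

target = disk(100, 100, 30)            # true (shifted) target position
aperture = disk(110, 100, 30)          # delivered beam aperture

underexposed = target & ~aperture      # target area missed by the beam
overexposed = aperture & ~target       # healthy tissue inside the beam
uncertainty = underexposed.sum() + overexposed.sum()
print("geometric uncertainty (pixels):", int(uncertainty))
```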
FuzzyFusion: an application architecture for multisource information fusion
NASA Astrophysics Data System (ADS)
Fox, Kevin L.; Henning, Ronda R.
2009-04-01
The correlation of information from disparate sources has long been an issue in data fusion research. Traditional data fusion addresses the correlation of information from sources as diverse as single-purpose sensors and all-source multimedia information. Information system vulnerability information is similar in its diversity of sources and content, and in the desire to draw a meaningful conclusion, namely, the security posture of the system under inspection. FuzzyFusion™, a data fusion model that is being applied to the computer network operations domain, is presented. This model has been successfully prototyped in an applied research environment and represents a next-generation assurance tool for system and network security.
Jouhet, V; Defossez, G; Ingrand, P
2013-01-01
The aim of this study was to develop and evaluate a selection algorithm for relevant records for the notification of incident cases of cancer on the basis of the individual data available in a multi-source information system. This work was conducted on data for the year 2008 in the general cancer registry of the Poitou-Charentes region (France). The selection algorithm hierarchizes information according to its level of relevance for tumoral topography and tumoral morphology independently. The selected data are combined to form composite records. These records are then grouped in accordance with the notification rules of the International Agency for Research on Cancer for multiple primary cancers. The evaluation, based on recall, precision and F-measure, compared cases validated manually by the registry's physicians with tumours notified with and without record selection. The analysis involved 12,346 tumours validated among 11,971 individuals. The data used were hospital discharge data (104,474 records), pathology data (21,851 records), healthcare insurance data (7508 records) and cancer care centre data (686 records). The selection algorithm improved performance for the notification of tumour topography (F-measure 0.926 with vs. 0.857 without selection) and tumour morphology (F-measure 0.805 with vs. 0.750 without selection). These results show that selecting information according to its origin is effective in reducing the noise generated by imprecise coding. Further research is needed to solve the semantic problems relating to the integration of heterogeneous data and the use of unstructured information.
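As a reminder of the evaluation metrics quoted above, precision, recall and F-measure follow directly from counts of true positives, false positives and false negatives; the counts below are purely illustrative, not the study's data.

```python
# Precision, recall and F1 from confusion-matrix counts (illustrative values).
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(tp=900, fp=80, fn=120))
```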
Information transfer in auditoria and room-acoustical quality.
Summers, Jason E
2013-04-01
It is hypothesized that room-acoustical quality correlates with the information-transfer rate. Auditoria are considered as multiple-input multiple-output communication channels and a theory of information-transfer is outlined that accounts for time-variant multipath, spatial hearing, and distributed directional sources. Source diversity and spatial hearing are shown to be the mechanisms through which multipath increases the information-transfer rate by overcoming finite spatial resolution. In addition to predictions that are confirmed by recent and historical findings, the theory provides explanations for the influence of factors such as musical repertoire and ensemble size on subjective preference and the influence of multisource, multichannel auralization on perceived realism.
NASA Astrophysics Data System (ADS)
Ren, Y.
2017-12-01
Context: The spatio-temporal distribution patterns of urban forest land surface temperatures (LSTs) are influenced by many ecological factors; identifying the interactions between these factors can improve simulations and predictions of the spatial patterns of urban cold islands. This quantitative research requires an integrated method that combines multi-source data with spatial statistical analysis. Objectives: The purpose of this study was to clarify how interactions between anthropogenic activity and multiple ecological factors influence urban forest LST, using cluster analysis of hot and cold spots and the GeoDetector model. We introduced the hypothesis that anthropogenic activity interacts with certain ecological factors, and that their combination influences urban forest LST. We also assumed that the spatio-temporal distributions of urban forest LST are similar to those of the ecological factors and can be represented quantitatively. Methods: We used Jinjiang, a representative city in China, as a case study. Population density was employed to represent anthropogenic activity. We built a multi-source data set (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) on a unified urban scale to support research on the interactions influencing urban forest LST. By combining spatial statistical analysis results, multi-source spatial data, and the GeoDetector model, the interaction mechanisms of urban forest LST were revealed. Results: Although different ecological factors have different influences on forest LST, in two periods with different hot and cold spots, patch area and dominant tree species were the main factors contributing to LST clustering in urban forests. The interaction between anthropogenic activity and multiple ecological factors increased LST in urban forest stands, both linearly and nonlinearly. Strong interactions between elevation and dominant species were generally observed and were prevalent in both hot-spot and cold-spot areas in different years. Conclusions: A combination of spatial statistics and GeoDetector models should be effective for quantitatively evaluating the interactive relationships among ecological factors, anthropogenic activity and LST.
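A minimal sketch of the GeoDetector q-statistic underlying this kind of analysis, assuming synthetic stand-level LST values stratified by a categorical factor; interaction detection applies the same formula to the overlay of two factors.

```python
# Sketch: q-statistic = share of variance explained by a factor's strata.
import numpy as np

def q_statistic(values, strata):
    values, strata = np.asarray(values, float), np.asarray(strata)
    total_ss = values.size * values.var()
    within_ss = sum(values[strata == s].size * values[strata == s].var()
                    for s in np.unique(strata))
    return 1.0 - within_ss / total_ss

rng = np.random.default_rng(4)
species = rng.integers(0, 3, size=500)            # dominant-species classes
lst = 30 + species * 0.8 + rng.normal(0, 1, 500)  # synthetic LST per stand
print(round(q_statistic(lst, species), 3))
```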
Geometric Factors in Target Positioning and Tracking
2009-07-01
Multi-Target Regression via Robust Low-Rank Learning.
Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo
2018-02-01
Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.
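A simplified sketch of the low-rank idea in multi-target regression, assuming synthetic data: fit ordinary least squares for all targets jointly, then truncate the coefficient matrix to a small rank via SVD. This reduced-rank stand-in is not the MMR algorithm with matrix elastic nets and kernels.

```python
# Sketch: reduced-rank multi-target regression via SVD truncation of the
# jointly fitted coefficient matrix (synthetic low-rank ground truth).
import numpy as np

rng = np.random.default_rng(5)
n, p, t, r = 300, 20, 6, 2
W_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, t))   # rank-r targets
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)                # per-target OLS
U, s, Vt = np.linalg.svd(W_ols, full_matrices=False)
W_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r truncation

print("OLS error:     ", round(np.linalg.norm(W_ols - W_true), 3))
print("low-rank error:", round(np.linalg.norm(W_lowrank - W_true), 3))
```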
NASA Astrophysics Data System (ADS)
Eberle, J.; Schmullius, C.
2017-12-01
Increasing archives of global satellite data present a new challenge to handle multi-source satellite data in a user-friendly way. Any user is confronted with different data formats and data access services. In addition the handling of time-series data is complex as an automated processing and execution of data processing steps is needed to supply the user with the desired product for a specific area of interest. In order to simplify the access to data archives of various satellite missions and to facilitate the subsequent processing, a regional data and processing middleware has been developed. The aim of this system is to provide standardized and web-based interfaces to multi-source time-series data for individual regions on Earth. For further use and analysis uniform data formats and data access services are provided. Interfaces to data archives of the sensor MODIS (NASA) as well as the satellites Landsat (USGS) and Sentinel (ESA) have been integrated in the middleware. Various scientific algorithms, such as the calculation of trends and breakpoints of time-series data, can be carried out on the preprocessed data on the basis of uniform data management. Jupyter Notebooks are linked to the data and further processing can be conducted directly on the server using Python and the statistical language R. In addition to accessing EO data, the middleware is also used as an intermediary between the user and external databases (e.g., Flickr, YouTube). Standardized web services as specified by OGC are provided for all tools of the middleware. Currently, the use of cloud services is being researched to bring algorithms to the data. As a thematic example, an operational monitoring of vegetation phenology is being implemented on the basis of various optical satellite data and validation data from the German Weather Service. Other examples demonstrate the monitoring of wetlands focusing on automated discovery and access of Landsat and Sentinel data for local areas.
van Ruitenbeek, Gemma M C; Zijlstra, Fred R H; Hülsheger, Ute R
2018-06-04
Purpose: Participation in regular paid jobs positively affects the mental and physical health of all people, including people with limited work capacities (LWC), that is, people whose work capacity is limited as a consequence of a disability such as chronic mental illness or a psychological or developmental disorder. For successful participation, a good fit between a person's capacities on the one hand and well-suited individual support and a suitable work environment on the other is necessary to meet the demands of work. However, to date there is a striking paucity of validated measures that indicate the work capability of people with LWC and that outline directions for support to facilitate this fit. The goal of the present study was therefore to develop such an instrument. Specifically, we adjusted measures of mental ability, conscientiousness, self-efficacy, and coping by simplifying their language level to make the scales accessible to people with low literacy. To validate these adjusted self-report and observer measures, we conducted two studies using multi-source, longitudinal data. Method: Study 1 was a longitudinal multi-source study in which the newly developed instrument was administered twice to people with LWC and their significant others. We statistically tested the psychometric properties with respect to dimensionality and reliability. In Study 2, we collected new multi-source data and conducted a confirmatory factor analysis (CFA). Results: The studies yielded a congruous factor structure in both samples, internally consistent measures with adequate content validity of the scales and subscales, and high test-retest reliability. The CFA confirmed the factorial validity of the scales. Conclusion: The adjusted self-report and observer scales of mental ability, conscientiousness, self-efficacy, and coping are reliable measures that are well suited to assess the work capability of people with LWC. Further research is needed to examine criterion-related validity with respect to work demands such as work behaviour and task performance.
A Multisource Approach to Assessing Child Maltreatment From Records, Caregivers, and Children.
Sierau, Susan; Brand, Tilman; Manly, Jody Todd; Schlesier-Michel, Andrea; Klein, Annette M; Andreas, Anna; Garzón, Leonhard Quintero; Keil, Jan; Binser, Martin J; von Klitzing, Kai; White, Lars O
2017-02-01
Practitioners and researchers alike face the challenge that different sources report inconsistent information regarding child maltreatment. The present study capitalizes on concordance and discordance between different sources and probes applicability of a multisource approach to data from three perspectives on maltreatment-Child Protection Services (CPS) records, caregivers, and children. The sample comprised 686 participants in early childhood (3- to 8-year-olds; n = 275) or late childhood/adolescence (9- to 16-year-olds; n = 411), 161 from two CPS sites and 525 from the community oversampled for psychosocial risk. We established three components within a factor-analytic approach: the shared variance between sources on presence of maltreatment (convergence), nonshared variance resulting from the child's own perspective, and the caregiver versus CPS perspective. The shared variance between sources was the strongest predictor of caregiver- and self-reported child symptoms. Child perspective and caregiver versus CPS perspective mainly added predictive strength of symptoms in late childhood/adolescence over and above convergence in the case of emotional maltreatment, lack of supervision, and physical abuse. By contrast, convergence almost fully accounted for child symptoms for failure to provide. Our results suggest consistent information from different sources reporting on maltreatment is, on average, the best indicator of child risk.
1998-05-22
… Environments With Application To Multitarget Tracking. This research project is concerned with two distinct aspects of analysis and processing of signals …
Development of a multitarget tracking system for paramecia
NASA Astrophysics Data System (ADS)
Yeh, Yu-Sing; Huang, Ke-Nung; Jen, Sun-Lon; Li, Yan-Chay; Young, Ming-Shing
2010-07-01
This investigation develops a multitarget tracking system for the motile protozoan Paramecium. The system can recognize, track, and record the orbits of swimming paramecia within a circular experimental pool 4 mm in diameter. The proposed system is implemented using an optical microscope, a charge-coupled device camera, and a software tool, Laboratory Virtual Instrumentation Engineering Workbench (LABVIEW). An algorithm for processing the images and analyzing the traces of the paramecia is developed in LABVIEW. It focuses on extracting meaningful data in an experiment and recording them to elucidate the behavior of paramecia. The algorithm can also continue to track paramecia even if they are transposed or collide with each other. The experiment demonstrates that this multitarget tracking design can track more than five paramecia and simultaneously yield meaningful data from paramecia moving at speeds of up to 1.7 mm/s.
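The frame-to-frame association step in this kind of multitarget tracker can be sketched as a minimum-cost assignment between previous track positions and new detections; the coordinates below are toy values, and the image segmentation that produces detections is not shown.

```python
# Sketch: match detections to tracks by minimizing total Euclidean distance
# (Hungarian assignment via SciPy).
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])      # last positions
detections = np.array([[41.0, 43.0], [11.0, 13.0], [69.0, 17.0]])  # new frame

cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
track_idx, det_idx = linear_sum_assignment(cost)
for t, d in zip(track_idx, det_idx):
    print(f"track {t} -> detection {d} (distance {cost[t, d]:.1f})")
```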
[Application of network biology on study of traditional Chinese medicine].
Tian, Sai-Sai; Yang, Jian; Zhao, Jing; Zhang, Wei-Dong
2018-01-01
With the completion of the human genome project, people have gradually recognized that the functions of the biological system are fulfilled through network-type interactions between genes, proteins and small molecules, while complex diseases are caused by the imbalance of biological processes due to a number of gene expression disorders. These insights have contributed to the rise of the concept of "multi-target" drug discovery. Treatment and diagnosis in traditional Chinese medicine are based on holism and syndrome differentiation. At the molecular level, traditional Chinese medicine is characterized by multi-component and multi-target prescriptions, which are expected to provide a reference for the development of multi-target drugs. This paper reviews the application of network biology in traditional Chinese medicine in six aspects, with the aim of providing a reference for the modernization of traditional Chinese medicine research. Copyright© by the Chinese Pharmaceutical Association.
A Noncontact FMCW Radar Sensor for Displacement Measurement in Structural Health Monitoring
Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi
2015-01-01
This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and further the multiple-target displacement measurement is analyzed and simulated. In addition, a FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. The conducted outdoor experiments verify the feasibility of this sensing method applied to multi-target displacement measurement, and experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter level accuracy. PMID:25822139
A noncontact FMCW radar sensor for displacement measurement in structural health monitoring.
Li, Cunlong; Chen, Weimin; Liu, Gang; Yan, Rong; Xu, Hengyi; Qi, Yi
2015-03-26
This paper investigates the Frequency Modulation Continuous Wave (FMCW) radar sensor for multi-target displacement measurement in Structural Health Monitoring (SHM). The principle of three-dimensional (3-D) displacement measurement of civil infrastructures is analyzed. The requirements of high-accuracy displacement and multi-target identification for the measuring sensors are discussed. The fundamental measuring principle of FMCW radar is presented with rigorous mathematical formulas, and further the multiple-target displacement measurement is analyzed and simulated. In addition, a FMCW radar prototype is designed and fabricated based on an off-the-shelf radar frontend and data acquisition (DAQ) card, and the displacement error induced by phase asynchronism is analyzed. The conducted outdoor experiments verify the feasibility of this sensing method applied to multi-target displacement measurement, and experimental results show that three targets located at different distances can be distinguished simultaneously with millimeter level accuracy.
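The range-estimation step of an FMCW radar can be illustrated with a simple simulation: each target produces a beat tone whose frequency maps to range via R = c f_b T / (2 B). The sweep parameters and target ranges below are assumptions for illustration; displacement tracking from the beat phase is not reproduced.

```python
# Sketch: FMCW multi-target range estimation from the beat-signal spectrum.
import numpy as np

c = 3e8
B, T = 1e9, 1e-3                      # sweep bandwidth (Hz) and duration (s)
fs = 2e6                              # beat-signal sampling rate
t = np.arange(0, T, 1 / fs)
ranges_true = [6.0, 12.0]             # metres (two simulated targets)

slope = B / T
beat = sum(np.cos(2 * np.pi * (2 * slope * R / c) * t) for R in ranges_true)

spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(beat.size, 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]          # two strongest beat tones
print(sorted(c * f * T / (2 * B) for f in peaks))
```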
Simoni, Elena; Bartolini, Manuela; Abu, Izuddin F; Blockley, Alix; Gotti, Cecilia; Bottegoni, Giovanni; Caporaso, Roberta; Bergamini, Christian; Andrisano, Vincenza; Cavalli, Andrea; Mellor, Ian R; Minarini, Anna; Rosini, Michela
2017-06-01
Alzheimer pathogenesis has been associated with a network of processes working simultaneously and synergistically. Over time, much interest has been focused on cholinergic transmission and its mutual interconnections with other active players of the disease. Besides the cholinesterase mainstay, the multifaceted interplay between nicotinic receptors and amyloid is actually considered to have a central role in neuroprotection. Thus, the multitarget drug-design strategy has emerged as a chance to face the disease network. By exploiting the multitarget approach, hybrid compounds have been synthesized and studied in vitro and in silico toward selected targets of the cholinergic and amyloidogenic pathways. The new molecules were able to target the cholinergic system, by joining direct nicotinic receptor stimulation to acetylcholinesterase inhibition, and to inhibit amyloid-β aggregation. The compounds emerged as a suitable starting point for a further optimization process.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino.
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
NASA Astrophysics Data System (ADS)
Barlacchi, Gianni; de Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-10-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.
Multisource energy system project
NASA Astrophysics Data System (ADS)
Dawson, R. W.; Cowan, R. A.
1987-03-01
The mission of this project is to investigate methods of providing uninterruptible power to Army communications and navigational facilities, many of which have limited access or are located in rugged terrain. Two alternatives are currently available for deploying terrestrial stand-alone power systems: (1) conventional electric systems powered by diesel fuel, propane, or natural gas, and (2) alternative power systems using renewable energy sources such as solar photovoltaics (PV) or wind turbines (WT). The increased cost of fuels for conventional systems and the high cost of energy storage for single-source renewable energy systems have created interest in the hybrid or multisource energy system. This report will provide a summary of the first and second interim reports, final test results, and a user's guide for software that will assist in applying and designing multi-source energy systems.
A multi-source dataset of urban life in the city of Milan and the Province of Trentino
Barlacchi, Gianni; De Nadai, Marco; Larcher, Roberto; Casella, Antonio; Chitic, Cristiana; Torrisi, Giovanni; Antonelli, Fabrizio; Vespignani, Alessandro; Pentland, Alex; Lepri, Bruno
2015-01-01
The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others. PMID:26528394
[Quality by design approaches for pharmaceutical development and manufacturing of Chinese medicine].
Xu, Bing; Shi, Xin-Yuan; Wu, Zhi-Sheng; Zhang, Yan-Ling; Wang, Yun; Qiao, Yan-Jiang
2017-03-01
The pharmaceutical quality was built by design, formed in the manufacturing process and improved during the product's lifecycle. Based on the comprehensive literature review of pharmaceutical quality by design (QbD), the essential ideas and implementation strategies of pharmaceutical QbD were interpreted. Considering the complex nature of Chinese medicine, the "4H" model was innovated and proposed for implementing QbD in pharmaceutical development and industrial manufacture of Chinese medicine product. "4H" corresponds to the acronym of holistic design, holistic information analysis, holistic quality control, and holistic process optimization, which is consistent with the holistic concept of Chinese medicine theory. The holistic design aims at constructing both the quality problem space from the patient requirement and the quality solution space from multidisciplinary knowledge. Holistic information analysis emphasizes understanding the quality pattern of Chinese medicine by integrating and mining multisource data and information at a relatively high level. The batch-to-batch quality consistence and manufacturing system reliability can be realized by comprehensive application of inspective quality control, statistical quality control, predictive quality control and intelligent quality control strategies. Holistic process optimization is to improve the product quality and process capability during the product lifecycle management. The implementation of QbD is useful to eliminate the ecosystem contradictions lying in the pharmaceutical development and manufacturing process of Chinese medicine product, and helps guarantee the cost effectiveness. Copyright© by the Chinese Pharmaceutical Association.
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and single-hit multi-target model, are included in the software. RAD-ADAPT uses maximum likelihood estimation to obtain parameter estimates, with the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
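The estimation step behind such a tool can be illustrated without reproducing RAD-ADAPT itself. The following is a minimal sketch, assuming a linear-quadratic survival model and Poisson-distributed colony counts; the dose levels, counts and starting values are hypothetical, and scipy is used here in place of the ADAPT/R implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Hypothetical clonogenic assay data: dose (Gy), cells plated, colonies counted
doses    = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
n_plated = np.array([100, 200, 400, 1000, 4000, 10000])
counts   = np.array([55, 80, 95, 120, 140, 90])

def neg_log_likelihood(params):
    """Poisson negative log-likelihood under the linear-quadratic model.

    Expected colonies = n_plated * PE * exp(-(alpha*D + beta*D^2)),
    where PE is the plating efficiency at zero dose.
    """
    log_pe, alpha, beta = params
    pe = np.exp(log_pe)                      # keep plating efficiency positive
    mu = n_plated * pe * np.exp(-(alpha * doses + beta * doses**2))
    # Poisson log-pmf: k*log(mu) - mu - log(k!)
    ll = np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))
    return -ll

res = minimize(neg_log_likelihood, x0=[np.log(0.5), 0.3, 0.03], method="Nelder-Mead")
log_pe, alpha, beta = res.x
print(f"PE = {np.exp(log_pe):.3f}, alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2")
```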
Network-based drug discovery by integrating systems biology and computational technologies
Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua
2013-01-01
Network-based intervention has become a trend in treating systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery owing to its potential synergistic treatment effects. Recently, the robustness of multiple systems biology platforms has proven powerful for uncovering molecular mechanisms and connections between drugs and the dynamic networks they target. However, optimization methods for drug combinations remain insufficient, owing to the lack of tighter integration across multiple ‘-omics’ databases. Newly developed algorithm- or network-based computational models can tightly integrate ‘-omics’ databases and optimize combinatorial regimens in drug development, which encourages developing medicinal herbs into a new wave of network-based multi-target drugs. However, challenges to further integrating medicinal herb databases with multiple systems biology platforms for multi-target drug optimization remain, owing to the uncertain reliability of individual data sets and the breadth, depth and degree of standardization of herbal medicine. Standardizing the methodology and terminology of systems biology platforms and herbal databases would facilitate this integration. Enhancing publicly accessible databases and increasing the number of studies applying systems biology platforms to herbal medicine would also help. Further integration across various ‘-omics’ platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768
Senay, Gabriel B.; Velpuri, Naga Manohar; Alemu, Henok; Pervez, Shahriar Md; Asante, Kwabena O; Karuki, Gatarwa; Taa, Asefa; Angerer, Jay
2013-01-01
Timely information on the availability of water and forage is important for the sustainable development of pastoral regions. The lack of such information increases the dependence of pastoral communities on perennial sources, which often leads to competition and conflicts. The provision of timely information is a challenging task, especially due to the scarcity or non-existence of conventional station-based hydrometeorological networks in the remote pastoral regions. A multi-source water balance modelling approach driven by satellite data was used to operationally monitor daily water level fluctuations across the pastoral regions of northern Kenya and southern Ethiopia. Advanced Spaceborne Thermal Emission and Reflection Radiometer data were used for mapping and estimating the surface area of the waterholes. Satellite-based rainfall, modelled run-off and evapotranspiration data were used to model daily water level fluctuations. Mapping of waterholes was achieved with 97% accuracy. Validation of modelled water levels with field-installed gauge data demonstrated the ability of the model to capture the seasonal patterns and variations. Validation results indicate that the model explained 60% of the observed variability in water levels, with an average root-mean-squared error of 22%. Up-to-date information on rainfall, evaporation, scaled water depth and condition of the waterholes is made available daily in near-real time via the Internet (http://watermon.tamu.edu). Such information can be used by non-governmental organizations, governmental organizations and other stakeholders for early warning and decision making. This study demonstrated an integrated approach for establishing an operational waterhole monitoring system using multi-source satellite data and hydrologic modelling.
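As a rough illustration of the daily bookkeeping such a satellite-driven water balance model performs, the sketch below updates a scaled water depth from rainfall, modelled run-off and evapotranspiration. It is not the operational model; the coefficients, capacity and forcing values are hypothetical.

```python
def update_scaled_depth(depth, rain_mm, runoff_mm, et_mm, seepage_mm=1.0, capacity_mm=2000.0):
    """One daily water-balance step for a waterhole, in millimetres of water depth.

    The depth is clipped to [0, capacity_mm] and can be rescaled to [0, 1]
    ("scaled water depth") for reporting.
    """
    depth = depth + rain_mm + runoff_mm - et_mm - seepage_mm
    return min(max(depth, 0.0), capacity_mm)

# Toy example: a week of (rainfall, run-off, evapotranspiration) forcing in mm
forcing = [(12.0, 30.0, 6.5), (0.0, 5.0, 7.0), (0.0, 1.0, 7.2),
           (25.0, 60.0, 5.8), (0.0, 8.0, 6.9), (0.0, 2.0, 7.1), (0.0, 0.5, 7.3)]
depth = 800.0
for rain, runoff, et in forcing:
    depth = update_scaled_depth(depth, rain, runoff, et)
print(f"scaled water depth after one week: {depth / 2000.0:.2f}")
```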
Crossley, James G M
2015-01-01
Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals, but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. The KSF framework provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability, response rates and unstructured interviews to evaluate feasibility, and factor structure and correlation studies to evaluate validity. On a voluntary basis the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable as an assessment method to inform appraisal in nurses as it is in other health professional groups. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
Lai, Michelle Mei Yee; Roberts, Noel; Martin, Jenepher
2014-09-17
Oral feedback from clinical educators is the traditional teaching method for improving clinical consultation skills in medical students. New approaches are needed to enhance this teaching model. Multisource feedback is a commonly used assessment method for learning among practising clinicians, but this assessment has not been explored rigorously in medical student education. This study seeks to evaluate whether additional feedback on patient satisfaction improves medical student performance. The Patient Teaching Associate (PTA) Feedback Study is a single-site, randomized, controlled, double-blinded trial with two parallel groups. An after-hours general practitioner clinic in Victoria, Australia, is adapted as a teaching clinic during the day. Medical students from two universities in their first clinical year participate in six simulated clinical consultations with ambulatory patient volunteers living with chronic illness. Eligible students will be randomized in equal proportions to receive patient satisfaction score feedback with the usual multisource feedback, or the usual multisource feedback alone as control. Block randomization will be performed. We will assess patient satisfaction and consultation performance outcomes at baseline and after one semester and will compare the change in mean scores between the last session and baseline. We will model data using regression analysis to determine any differences between intervention and control groups. Full ethical approval has been obtained for the study. This trial will comply with CONSORT guidelines and we will disseminate data at conferences and in peer-reviewed journals. This is the first proposed trial to determine whether consumer feedback enhances the use of multisource feedback in medical student education, and to assess the value of multisource feedback in teaching and learning about the management of ambulatory patients living with chronic conditions. Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12613001055796.
Multisource feedback analysis of pediatric outpatient teaching
2013-01-01
Background This study aims to evaluate the outpatient communication skills of medical students via multisource feedback, which may be useful to map future directions in improving physician-patient communication. Methods Family respondents of patients, a nurse, a clinical teacher, and a research assistant evaluated video-recorded medical students’ interactions with outpatients by using multisource feedback questionnaires; students also assessed their own skills. The questionnaire was answered based on the video-recorded interactions between outpatients and the medical students. Results A total of 60 family respondents of the 60 patients completed the questionnaires; 58 (96.7%) of them agreed with the video recording. Two reasons for reluctance were “personal privacy” issues and “simply disagree” with the video recording. The average satisfaction score of the 58 students was 85.1 points, indicating students’ performance was in the category between satisfied and very satisfied. The family respondents were most satisfied with the “teacher’s attitude”, followed by “teaching quality”. In contrast, the family respondents were least satisfied with “being open to questions”. Among the 6 assessment domains of communication skills, the students scored highest on “explaining” and lowest on “giving recommendations”. In the detailed assessment by family respondents, the students scored lowest on “asking about life/school burden”. In the multisource analysis, the nurses’ mean score was much higher and the students’ mean self-assessment score was lower than the average scores on all domains. Conclusion The willingness and satisfaction of family respondents were high in this study. Students scored the lowest on giving recommendations to patients. Multisource feedback with video recording is useful in providing more accurate evaluation of students’ communication competence and in identifying the areas of communication that require enhancement. PMID:24180615
Systems biology approaches and tools for analysis of interactomes and multi-target drugs.
Schrattenholz, André; Groebe, Karlfried; Soskic, Vukic
2010-01-01
Systems biology is essentially a proteomic and epigenetic exercise because the relatively condensed information of genomes unfolds on the level of proteins. The flexibility of cellular architectures is not only mediated by a dazzling number of proteinaceous species but moreover by the kinetics of their molecular changes: The time scales of posttranslational modifications range from milliseconds to years. The genetic framework of an organism only provides the blue print of protein embodiments which are constantly shaped by external input. Indeed, posttranslational modifications of proteins represent the scope and velocity of these inputs and fulfil the requirements of integration of external spatiotemporal signal transduction inside an organism. The optimization of biochemical networks for this type of information processing and storage results in chemically extremely fine tuned molecular entities. The huge dynamic range of concentrations, the chemical diversity and the necessity of synchronisation of complex protein expression patterns pose the major challenge of systemic analysis of biological models. One further message is that many of the key reactions in living systems are essentially based on interactions of moderate affinities and moderate selectivities. This principle is responsible for the enormous flexibility and redundancy of cellular circuitries. In complex disorders such as cancer or neurodegenerative diseases, which initially appear to be rooted in relatively subtle dysfunctions of multimodal physiologic pathways, drug discovery programs based on the concept of high affinity/high specificity compounds ("one-target, one-disease"), which has been dominating the pharmaceutical industry for a long time, increasingly turn out to be unsuccessful. Despite improvements in rational drug design and high throughput screening methods, the number of novel, single-target drugs fell much behind expectations during the past decade, and the treatment of "complex diseases" remains a most pressing medical need. Currently, a change of paradigm can be observed with regard to a new interest in agents that modulate multiple targets simultaneously, essentially "dirty drugs." Targeting cellular function as a system rather than on the level of the single target, significantly increases the size of the drugable proteome and is expected to introduce novel classes of multi-target drugs with fewer adverse effects and toxicity. Multiple target approaches have recently been used to design medications against atherosclerosis, cancer, depression, psychosis and neurodegenerative diseases. A focussed approach towards "systemic" drugs will certainly require the development of novel computational and mathematical concepts for appropriate modelling of complex data. But the key is the extraction of relevant molecular information from biological systems by implementing rigid statistical procedures to differential proteomic analytics.
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-01-01
The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data that can be employed to reconstruct the ionospheric electron density (IED). To improve the vertical resolution of IED, we investigate IED reconstruction by integrating ground-based GPS data, occultation data from the LEO satellite, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that multi-source data fusion reconstructs electron density effectively and reliably, showing its superiority over CIT with GPS data alone. PMID:26266764
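A minimal sketch of how multi-source observations can stabilise an ill-posed tomographic inversion, assuming each source contributes a linear projection of the discretised electron density and a reliability weight. The matrices below are random placeholders, not real GPS, occultation, altimetry or ionosonde geometries, and Tikhonov regularisation is used as a generic stand-in for the study's reconstruction scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 200                     # discretised electron-density voxels
x_true = rng.random(n_vox)

# Hypothetical projection matrices and noisy data from several observing sources
sources = []
for n_rays, weight in [(150, 1.0), (40, 0.8), (20, 0.6), (10, 0.9)]:
    A = rng.random((n_rays, n_vox))
    b = A @ x_true + 0.01 * rng.standard_normal(n_rays)
    sources.append((weight * A, weight * b))     # reliability-weighted rows

# Stack all sources and add Tikhonov regularisation: min ||Ax - b||^2 + lam ||x||^2
lam = 0.1
A_all = np.vstack([A for A, _ in sources] + [np.sqrt(lam) * np.eye(n_vox)])
b_all = np.concatenate([b for _, b in sources] + [np.zeros(n_vox)])
x_hat, *_ = np.linalg.lstsq(A_all, b_all, rcond=None)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```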
Ma, X H; Wang, R; Tan, C Y; Jiang, Y Y; Lu, T; Rao, H B; Li, X Y; Go, M L; Low, B C; Chen, Y Z
2010-10-04
Multitarget agents have been increasingly explored for enhancing efficacy and reducing countertarget activities and toxicities. Efficient virtual screening (VS) tools for searching selective multitarget agents are desired. Combinatorial support vector machines (C-SVM) were tested as VS tools for searching dual-inhibitors of 11 combinations of 9 anticancer kinase targets (EGFR, VEGFR, PDGFR, Src, FGFR, Lck, CDK1, CDK2, GSK3). C-SVM trained on 233-1,316 non-dual-inhibitors correctly identified 26.8%-57.3% (majority >36%) of the 56-230 intra-kinase-group dual-inhibitors (equivalent to the 50-70% yields of two independent individual target VS tools), and 12.2% of the 41 inter-kinase-group dual-inhibitors. C-SVM were fairly selective in misidentifying as dual-inhibitors 3.7%-48.1% (majority <20%) of the 233-1,316 non-dual-inhibitors of the same kinase pairs and 0.98%-4.77% of the 3,971-5,180 inhibitors of other kinases. C-SVM produced low false-hit rates in misidentifying as dual-inhibitors 1,746-4,817 (0.013%-0.036%) of the 13.56 M PubChem compounds, 12-175 (0.007%-0.104%) of the 168 K MDDR compounds, and 0-84 (0.0%-2.9%) of the 19,495-38,483 MDDR compounds similar to the known dual-inhibitors. C-SVM was compared to other VS methods Surflex-Dock, DOCK Blaster, kNN and PNN against the same sets of kinase inhibitors and the full set or subset of the 1.02 M Zinc clean-leads data set. C-SVM produced comparable dual-inhibitor yields, slightly better false-hit rates for kinase inhibitors, and significantly lower false-hit rates for the Zinc clean-leads data set. Combinatorial SVM showed promising potential for searching selective multitarget agents against intra-kinase-group kinases without explicit knowledge of multitarget agents.
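The abstract gives no code, but the screening idea can be sketched: train a support vector classifier to separate dual-inhibitors of a kinase pair from non-dual-inhibitors represented by molecular descriptors. The fingerprints and labels below are synthetic placeholders, not the descriptors or training sets used in the original work.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "molecular fingerprints": 1 = dual-inhibitor of a kinase pair, 0 = not
n_bits = 256
X_dual     = (rng.random((120, n_bits)) < 0.25).astype(float)
X_non_dual = (rng.random((900, n_bits)) < 0.15).astype(float)
X = np.vstack([X_dual, X_non_dual])
y = np.concatenate([np.ones(len(X_dual)), np.zeros(len(X_non_dual))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# An RBF-kernel SVM as the virtual-screening classifier for this kinase pair
clf = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
clf.fit(X_tr, y_tr)

hits = clf.predict(X_te)
yield_rate     = hits[y_te == 1].mean()        # fraction of true dual-inhibitors recovered
false_hit_rate = hits[y_te == 0].mean()        # fraction of non-dual-inhibitors flagged
print(f"dual-inhibitor yield: {yield_rate:.2f}, false-hit rate: {false_hit_rate:.3f}")
```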
The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework
NASA Astrophysics Data System (ADS)
Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.
2016-12-01
The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new and there has been a significant effort within the geospatial community to develop nested gridding standards to tackle these issues over many years. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard which has been developed under the auspices of Open Geospatial Consortium (OGC). DGGS provide a fixed areal based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During the 6th Session of the UN-GGIM in August 2016 the role of DGGS in the context of the GSGF was formally acknowledged. This paper proposes to highlight the synergies and role of DGGS in the Global Statistical Geospatial Framework and to show examples of the use of DGGS to combine geospatial statistics with traditional geoscientific data.
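To make the idea of a hierarchical cell index concrete, here is a toy quadtree-style index on latitude/longitude: each extra digit refines the cell, and aggregation to a parent cell is prefix truncation. This is only an illustration of the indexing principle, not the OGC DGGS standard or any particular grid.

```python
def cell_index(lat, lon, resolution):
    """Toy hierarchical cell index: one base-4 digit per refinement level."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    digits = []
    for _ in range(resolution):
        lat_mid = (lat_lo + lat_hi) / 2.0
        lon_mid = (lon_lo + lon_hi) / 2.0
        d = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        digits.append(str(d))
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(digits)

def aggregate(index, coarser_resolution):
    """The parent cell at a coarser resolution is just a prefix of the index."""
    return index[:coarser_resolution]

idx = cell_index(-35.28, 149.13, resolution=10)   # e.g. a point near Canberra
print(idx, aggregate(idx, 4))                     # fine cell and its level-4 ancestor
```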
ERIC Educational Resources Information Center
Burns, G. Leonard; Desmul, Chris; Walsh, James A.; Silpakit, Chatchawan; Ussahawanitchakit, Phapruke
2009-01-01
Confirmatory factor analysis was used with a multitrait (attention-deficit/hyperactivity disorder-inattention, attention-deficit/hyperactivity disorder-hyperactivity/impulsivity, oppositional defiant disorder toward adults, academic competence, and social competence) by multisource (mothers and fathers) matrix to test the invariance and…
Ng, Kok-Yee; Koh, Christine; Ang, Soon; Kennedy, Jeffrey C; Chan, Kim-Yin
2011-09-01
This study extends multisource feedback research by assessing the effects of rater source and raters' cultural value orientations on rating bias (leniency and halo). Using a motivational perspective of performance appraisal, the authors posit that subordinate raters followed by peers will exhibit more rating bias than superiors. More important, given that multisource feedback systems were premised on low power distance and individualistic cultural assumptions, the authors expect raters' power distance and individualism-collectivism orientations to moderate the effects of rater source on rating bias. Hierarchical linear modeling on data collected from 1,447 superiors, peers, and subordinates who provided developmental feedback to 172 military officers shows that (a) subordinates exhibit the most rating leniency, followed by peers and superiors; (b) subordinates demonstrate more halo than superiors and peers, whereas superiors and peers do not differ; (c) the effects of power distance on leniency and halo are stronger for subordinates than for peers and superiors; (d) the effects of collectivism on leniency were stronger for subordinates and peers than for superiors; effects on halo were stronger for subordinates than superiors, but these effects did not differ for subordinates and peers. The present findings highlight the role of raters' cultural values in multisource feedback ratings. PsycINFO Database Record (c) 2011 APA, all rights reserved
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the issue that available fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The objective function is designed as a weighted sum of evaluation indices and is optimized with GSDA to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows: • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules. • This text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
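A compressed sketch of the optimization idea, assuming the fusion rule is parameterised by a vector of weights and scored by a weighted sum of quality indices. The toy objective below stands in for real fusion metrics such as mutual information or Q(AB/F), and the genetic operators are generic ones, not the specific GSDA operators.

```python
import numpy as np

rng = np.random.default_rng(2)

def fusion_quality(w):
    """Stand-in for a weighted sum of fusion evaluation indices (higher is better)."""
    w = np.clip(w, 0.0, 1.0)
    target = np.array([0.7, 0.2, 0.9, 0.4])          # hypothetical optimal rule parameters
    return -np.sum((w - target) ** 2)

def genetic_search(fitness, dim=4, pop_size=40, generations=60, mut_sigma=0.1):
    pop = rng.random((pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]         # truncation selection
        # Uniform crossover between random parent pairs
        mates = parents[rng.integers(0, len(parents), size=len(parents))]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, mates)
        children += mut_sigma * rng.standard_normal(children.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return np.clip(best, 0.0, 1.0)

print("best fusion-rule parameters:", np.round(genetic_search(fusion_quality), 3))
```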
NASA Astrophysics Data System (ADS)
Han, P.; Long, D.
2017-12-01
Snow water equivalent (SWE) and total water storage (TWS) changes are important hydrological state variables over cryospheric regions, such as China's Upper Yangtze River (UYR) basin. Accurate simulation of these two state variables plays a critical role in understanding hydrological processes over this region and, in turn, benefits water resource management, hydropower development, and ecological integrity over the lower reaches of the Yangtze River, one of the largest rivers globally. In this study, an improved CREST model coupled with a snow and glacier melting module was used to simulate SWE and TWS changes over the UYR, and to quantify contributions of snow and glacier meltwater to the total runoff. Forcing, calibration, and validation data are mainly from multi-source remote sensing observations, including satellite-based precipitation estimates, passive microwave remote sensing-based SWE, and GRACE-derived TWS changes, along with streamflow measurements at the Zhimenda gauging station. Results show that multi-source remote sensing information can be extremely valuable in model forcing, calibration, and validation over the poorly gauged region. The simulated SWE and TWS changes and the observed counterparts are highly consistent, showing Nash-Sutcliffe efficiency (NSE) coefficients higher than 0.8. The results also show that the contributions of snow and glacier meltwater to the total runoff are 8% and 6%, respectively, during the period 2003‒2014, making meltwater an important source of runoff. Moreover, from this study, the TWS is found to increase at a rate of about 5 mm/a (0.72 Gt/a) for the period 2003‒2014. The snow melting module may overestimate SWE for high precipitation events and was improved in this study. Key words: CREST model; Remote Sensing; Melting model; Source Region of the Yangtze River
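For reference, the Nash-Sutcliffe efficiency used to judge the SWE and TWS simulations can be computed as below; the arrays are placeholders, not the study's data.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical monthly TWS anomalies (mm): GRACE-derived vs model-simulated
obs = [12.0, 18.5, 25.0, 10.0, -5.0, -14.0, -20.0, -8.0, 3.0, 9.0, 15.0, 20.0]
sim = [10.5, 20.0, 23.5, 12.0, -3.5, -15.5, -18.0, -9.5, 4.5, 8.0, 16.5, 18.0]
print(f"NSE = {nash_sutcliffe(sim, obs):.2f}")
```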
Modelling and Characterisation of Detection Models in WAMI for Handling Negative Information
2014-02-01
behaviour of the multi-stage detectors used in LoFT. This model is then used in a Probabilistic Hypothesis Density Filter (PHD). Unlike most multitarget...Therefore, we decided to use machine learning techniques which could model — and predict — the behaviour of the detectors in LoFT. Because we are using...on feature detectors [8], motion models [13] and descriptor and template adaptation [9]. 2.3.2 State Model The state space of LoFT is defined in 2D
Horizontal Estimation and Information Fusion in Multitarget and Multisensor Environments
1987-09-01
provided needed inspirations. Special thanks are due to Distinguished Professor G. J. Thaler, Professor R. Panholzer, Professor N. F. Schneidewind, and...Guidance McGraw Hill, pp. 338-340, 1964. 31. Battin, R. H., and Levine, G. M., Application of Kalman Filtering Techniques in The Apollo Program, in Theory...FL., pp. 171-175, Dec. 1971. 43. Singer, R. A., Sea, R. G., and Housewright, K. B., Derivation and Evaluation of Improved Tracking Filters for Use in
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
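A minimal sketch of the consensus-theoretic combination discussed here, assuming each data source outputs class probabilities and a reliability weight; a logarithmic opinion pool raises each source to its weight and renormalises. The probabilities and weights are illustrative.

```python
import numpy as np

def log_opinion_pool(source_probs, reliabilities):
    """Combine per-source class probabilities p_i(c|x) with reliability exponents w_i.

    The combined posterior is proportional to prod_i p_i(c|x) ** w_i.
    """
    source_probs = np.asarray(source_probs, dtype=float)      # shape (n_sources, n_classes)
    reliabilities = np.asarray(reliabilities, dtype=float)
    log_combined = reliabilities @ np.log(source_probs + 1e-12)
    combined = np.exp(log_combined - log_combined.max())      # stabilise before normalising
    return combined / combined.sum()

# Three sources (e.g. multispectral, elevation, slope) voting over four ground-cover classes
probs = [[0.60, 0.20, 0.15, 0.05],
         [0.30, 0.40, 0.20, 0.10],
         [0.50, 0.10, 0.30, 0.10]]
weights = [1.0, 0.5, 0.7]          # reliability of each source
print("combined posterior:", np.round(log_opinion_pool(probs, weights), 3))
```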
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, outlier-corrupted process and measurement noises can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process and measurement noises as Student’s t distributions and approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student’s t approximation. Our approach makes full use of the heavy-tailed characteristic of the Student’s t distribution to handle situations with heavy-tailed process and measurement noises. The simulation results verify that the proposed filter can overcome the negative effect of outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
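To see why a Student's t likelihood is more forgiving of outliers than a Gaussian, one can compare how strongly each penalises a far-off measurement; the sketch below uses scipy's standard densities with illustrative parameters, not the STM-PHD recursion itself.

```python
import numpy as np
from scipy.stats import norm, t

sigma = 1.0          # nominal measurement noise scale
nu = 3.0             # degrees of freedom of the heavy-tailed Student's t model

residuals = np.array([0.5, 2.0, 8.0])   # the last one is an outlier-corrupted measurement

gauss_loglik = norm.logpdf(residuals, loc=0.0, scale=sigma)
t_loglik = t.logpdf(residuals, df=nu, loc=0.0, scale=sigma)

for r, lg, lt in zip(residuals, gauss_loglik, t_loglik):
    print(f"residual {r:4.1f}:  Gaussian log-lik {lg:8.2f}   Student-t log-lik {lt:8.2f}")
# The Gaussian assigns the 8-sigma residual a vanishingly small likelihood, so a
# Gaussian-mixture PHD filter would be dragged by it; the Student's t tail keeps
# the penalty moderate, which is the property the STM-PHD filter exploits.
```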
PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-11-06
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility of hiding the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can greatly improve target tracking performance in a cluttered environment, and its computational load is low.
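The EKF used inside such a tracker to cope with a nonlinear sonar measurement model can be sketched as a standard predict/update pair for a range/bearing measurement; all matrices, noise levels and the constant-velocity motion model below are illustrative, not the paper's configuration.

```python
import numpy as np

def ekf_step(x, P, z, dt=1.0, q=0.05, r_range=5.0, r_bearing=0.01):
    """One EKF cycle for state [px, py, vx, vy] with a range/bearing measurement z."""
    # Predict with a constant-velocity model
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    Q = q * np.eye(4)
    x = F @ x
    P = F @ P @ F.T + Q

    # Nonlinear measurement: range and bearing from the origin (the sensor)
    px, py = x[0], x[1]
    rng2 = px**2 + py**2
    h = np.array([np.sqrt(rng2), np.arctan2(py, px)])
    H = np.array([[px / np.sqrt(rng2), py / np.sqrt(rng2), 0, 0],
                  [-py / rng2, px / rng2, 0, 0]])
    R = np.diag([r_range**2, r_bearing**2])

    # Update
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi       # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.array([900.0, 400.0, -5.0, 2.0]), 100.0 * np.eye(4)
x, P = ekf_step(x, P, z=np.array([990.0, 0.42]))
print("updated state:", np.round(x, 1))
```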
NASA Astrophysics Data System (ADS)
Guan, Yan-Qing; Zheng, Zhe; Huang, Zheng; Li, Zhibin; Niu, Shuiqin; Liu, Jun-Ming
2014-05-01
Nanomagnetic materials offer exciting avenues for advancing cancer therapies. Most research has focused on efficient delivery of drugs in the body by incorporating various drug molecules onto the surface of nanomagnetic particles. The challenge is how to synthesize low-toxicity nanocarriers with multi-target drug loading; the cancer cell death mechanisms associated with such nanocarriers also remain unclear. Guided by cell biology mechanisms, we develop a liquid photo-immobilization approach to attach doxorubicin, folic acid, tumor necrosis factor-α, and interferon-γ onto oleic acid-coated Fe3O4 magnetic nanoparticles to prepare a novel inner/outer-controlled multi-target magnetic nanoparticle drug carrier. In this work, this approach is demonstrated by a variety of structural and biomedical characterizations addressing the anti-cancer effects in vivo and in vitro on HeLa cells, and it proves highly efficient in treating cancer cells through a programmed cell death mechanism that is valuable for overcoming drug resistance.
Identification and characterization of carprofen as a multi-target FAAH/COX inhibitor
Favia, Angelo D.; Habrant, Damien; Scarpelli, Rita; Migliore, Marco; Albani, Clara; Bertozzi, Sine Mandrup; Dionisi, Mauro; Tarozzo, Glauco; Piomelli, Daniele; Cavalli, Andrea; De Vivo, Marco
2013-01-01
Pain and inflammation are major therapeutic areas for drug discovery. Current drugs for these pathologies have limited efficacy, however, and often cause a number of unwanted side effects. In the present study, we identify the non-steroid anti-inflammatory drug, carprofen, as a multi-target-directed ligand that simultaneously inhibits cyclooxygenase-1 (COX-1), COX-2 and fatty acid amide hydrolase (FAAH). Additionally, we synthesized and tested several racemic derivatives of carprofen, sharing this multi-target activity. This may result in improved analgesic efficacy and reduced side effects (Naidu, et al (2009) J Pharmacol Exp Ther 329, 48-56; Fowler, C.J. et al. (2012) J Enzym Inhib Med Chem Jan 6; Sasso, et al (2012) Pharmacol Res 65, 553). The new compounds are among the most potent multi-target FAAH/COXs inhibitors reported so far in the literature, and thus may represent promising starting points for the discovery of new analgesic and anti-inflammatory drugs. PMID:23043222
Impact of workplace based assessment on doctors' education and performance: a systematic review.
Miller, Alice; Archer, Julian
2010-09-24
To investigate the literature for evidence that workplace based assessment affects doctors' education and performance. Systematic review. The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors' performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors' education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.
Tarkang, Protus Arrey; Appiah-Opong, Regina; Ofori, Michael F; Ayong, Lawrence S; Nyarko, Alexander K
2016-01-01
There is an urgent need for new anti-malaria drugs with broad therapeutic potential and a novel mode of action, for effective treatment and to overcome emerging drug resistance. Plant-derived anti-malarials remain a significant source of bioactive molecules in this regard. The multicomponent formulation forms the basis of phytotherapy. Mechanistic reasons for the poly-pharmacological effects of plants include increased bioavailability, interference with cellular transport processes, activation of pro-drugs/deactivation of active compounds to inactive metabolites, and action of synergistic partners at different points of the same signaling cascade. These effects are known as the multi-target concept. However, due to the intrinsic complexity of natural products-based drug discovery, there is a need to rethink the approaches toward understanding their therapeutic effect. This review discusses the multi-target phytotherapeutic concept and its application in biomarker identification using the modified reverse pharmacology - systems biology approach. Considerations include the generation of a product library, high throughput screening (HTS) techniques for efficacy and interaction assessment, High Performance Liquid Chromatography (HPLC)-based anti-malarial profiling and animal pharmacology. This approach is an integrated interdisciplinary implementation of tailored technology platforms coupled to miniaturized biological assays, to track and characterize the multi-target bioactive components of botanicals as well as identify potential biomarkers. While preserving biodiversity, this will serve as a primary step towards the development of standardized phytomedicines, as well as facilitate lead discovery for chemical prioritization and downstream clinical development.
Tonelli, Michele; Catto, Marco; Tasso, Bruno; Novelli, Federica; Canu, Caterina; Iusco, Giovanna; Pisani, Leonardo; Stradis, Angelo De; Denora, Nunzio; Sparatore, Anna; Boido, Vito; Carotti, Angelo; Sparatore, Fabio
2015-06-01
Multitarget therapeutic leads for Alzheimer's disease were designed on the models of compounds capable of maintaining or restoring cell protein homeostasis and of inhibiting β-amyloid (Aβ) oligomerization. Thirty-seven thioxanthen-9-one, xanthen-9-one, naphtho- and anthraquinone derivatives were tested for the direct inhibition of Aβ(1-40) aggregation and for the inhibition of electric eel acetylcholinesterase (eeAChE) and horse serum butyrylcholinesterase (hsBChE). These compounds are characterized by basic side chains, mainly quinolizidinylalkyl moieties, linked to various bi- and tri-cyclic (hetero)aromatic systems. With very few exceptions, these compounds displayed inhibitory activity on both AChE and BChE and on the spontaneous aggregation of β-amyloid. In most cases, IC50 values were in the low micromolar and sub-micromolar range, but some compounds even reached nanomolar potency. The time course of amyloid aggregation in the presence of the most active derivative (IC50 = 0.84 μM) revealed that these compounds might act as destabilizers of mature fibrils rather than mere inhibitors of fibrillization. Many compounds inhibited one or both cholinesterases and Aβ aggregation with similar potency, a fundamental requisite for the possible development of therapeutics exhibiting a multitarget mechanism of action. The described compounds thus represent interesting leads for the development of multitarget AD therapeutics. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yu, Miaoyu; Law, Samuel; Dang, Kien; Byrne, Niall
2016-04-01
Psychiatry as a field and undergraduate psychiatry education (UPE) specifically have historically been in the periphery of medicine in China, unlike the relatively central role they occupy in the West. During the current economic reform, Chinese undergraduate medical education (UME) is undergoing significant changes and standardization under the auspices of the national accreditation body. A comparative study using Bereday's comparative education methodology and Feldmann's evaluative criteria as theoretical frameworks, aimed at understanding the differences and similarities between China and the West in terms of UPE, can contribute to UME reform and specifically UPE development in China, and promote cross-cultural understanding. The authors employed multi-sourced information to perform a comparative study of UPE, using the University of Toronto as a representative of the western model and Guangxi Medical University, a typical program in China, as the Chinese counterpart. Key contrasts are numerous; highlights include the difference in age and level of education of the entrants to medical school, centrally vs. locally developed UPE curriculum, level of integration with the rest of medical education, visibility within the medical school, adequacy of teaching resources, amount of clinical learning experience, opportunity for supervision and mentoring, and methods of student assessment. Examination of the existing, multi-sourced information reveals some fundamental differences in the current UPE between the representative Chinese and western programs, reflecting historical, political, cultural, and socioeconomic circumstances of the respective settings. The current analyses show some areas worthy of further exploration to inform Chinese UPE reform. The current research is a practical beginning to the development of a deeper collaborative dialogue about psychiatry and its educational underpinnings between China and the West.
Crawling and walking infants encounter objects differently in a multi-target environment.
Dosso, Jill A; Boudreau, J Paul
2014-10-01
From birth, infants move their bodies in order to obtain information and stimulation from their environment. Exploratory movements are important for the development of an infant's understanding of the world and are well established as being key to cognitive advances. Newly acquired motor skills increase the potential actions available to the infant. However, the way that infants employ potential actions in environments with multiple potential targets is undescribed. The current work investigated the target object selections of infants across a range of self-produced locomotor experience (11- to 14-month-old crawlers and walkers). Infants repeatedly accessed objects among pairs of objects differing in both distance and preference status, some requiring locomotion. Overall, their object actions were found to be sensitive to object preference status; however, the role of object distance in shaping object encounters was moderated by movement status. Crawlers' actions appeared opportunistic and were biased towards nearby objects while walkers' actions appeared intentional and were independent of object position. Moreover, walkers' movements favoured preferred objects more strongly for children with higher levels of self-produced locomotion experience. The multi-target experimental situation used in this work parallels conditions faced by foraging organisms, and infants' behaviours were discussed with respect to optimal foraging theory. There is a complex interplay between infants' agency, locomotor experience, and environment in shaping their motor actions. Infants' movements, in turn, determine the information and experiences offered to infants by their micro-environment.
Sound Localization in Multisource Environments
2009-03-01
A total of 7 paid volunteer listeners (3 males and 4 females, 20-25 years of age) participated in the experiment. All had normal hearing (i.e...effects of the loudspeaker frequency responses, and were then sent from an experimental control computer to a Mark of the Unicorn (MOTU 24 I/O) digital-to...after the overall multisource stimulus has been presented (the ’post-cue’ condition). 3.2 Methods 3.2.1 Listeners Eight listeners, ranging in age from
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images can be simultaneously pictured by various sensors. However, there are many scenarios where no one sensor can give the complete picture. Image fusion is an important approach to solve this problem and produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses the support value to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), the data with larger support values have a physical meaning in the sense that they reveal the relative importance of the data points for contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed by using a series of multiscale support value filters, which are obtained by filling zeros in the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with the widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and is superior to the conventional image fusion methods in terms of the pertinent quantitative fusion evaluation indexes, such as quality of visual information (Q(AB/F)), the mutual information, etc.
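The multiscale "support value filter" construction described here resembles an à trous scheme: the basic filter is dilated by inserting zeros between its taps and the image is decomposed into detail layers plus a coarse residual. The sketch below illustrates only that zero-filling/decomposition/max-magnitude-fusion pipeline with a generic smoothing kernel; it does not reproduce the mapped LS-SVM support value filter.

```python
import numpy as np
from scipy.signal import convolve2d

def dilate_kernel(kernel, level):
    """Insert 2**level - 1 zeros between the taps of the basic filter (a trous style)."""
    step = 2 ** level
    out = np.zeros(((kernel.shape[0] - 1) * step + 1, (kernel.shape[1] - 1) * step + 1))
    out[::step, ::step] = kernel
    return out

def decompose(img, base_kernel, levels=3):
    """Split an image into detail layers plus a coarse residual (undecimated)."""
    details, approx = [], img.astype(float)
    for lv in range(levels):
        smooth = convolve2d(approx, dilate_kernel(base_kernel, lv), mode="same", boundary="symm")
        details.append(approx - smooth)
        approx = smooth
    return details, approx

def fuse(img_a, img_b, base_kernel, levels=3):
    """Fuse two co-registered source images: max-magnitude detail, averaged residual."""
    da, ra = decompose(img_a, base_kernel, levels)
    db, rb = decompose(img_b, base_kernel, levels)
    fused = (ra + rb) / 2.0
    for a, b in zip(da, db):
        fused += np.where(np.abs(a) >= np.abs(b), a, b)
    return fused

kernel = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0    # generic smoothing filter
rng = np.random.default_rng(3)
img_a, img_b = rng.random((64, 64)), rng.random((64, 64))
print("fused image shape:", fuse(img_a, img_b, kernel).shape)
```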
A beam optics study of a modular multi-source X-ray tube for novel computed tomography applications
NASA Astrophysics Data System (ADS)
Walker, Brandon J.; Radtke, Jeff; Chen, Guang-Hong; Eliceiri, Kevin W.; Mackie, Thomas R.
2017-10-01
A modular implementation of a scanning multi-source X-ray tube is designed for the increasing number of multi-source imaging applications in computed tomography (CT). An electron beam array coupled with an oscillating magnetic deflector is proposed as a means for producing an X-ray focal spot at any position along a line. The preliminary multi-source model includes three thermionic electron guns that are deflected in tandem by a slowly varying magnetic field and pulsed according to a scanning sequence that is dependent on the intended imaging application. Particle tracking simulations with particle dynamics analysis software demonstrate that three 100 keV electron beams are laterally swept a combined distance of 15 cm over a stationary target with an oscillating magnetic field of 102 G perpendicular to the beam axis. Beam modulation is accomplished using 25 μs pulse widths to a grid electrode with a reverse gate bias of -500 V and an extraction voltage of +1000 V. Projected focal spot diameters are approximately 1 mm for 138 mA electron beams and the stationary target stays within thermal limits for the 14 kW module. This concept could be used as a research platform for investigating high-speed stationary CT scanners, for lowering dose with virtual fan beam formation, for reducing scatter radiation in cone-beam CT, or for other industrial applications.
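As a back-of-envelope check on the magnetic deflection figures quoted above, the sketch below computes the relativistic gyroradius of a 100 keV electron in a 102 G transverse field. It is a rough consistency check under simplified assumptions (uniform field, no space charge), not the beam-optics simulation used in the study.

```python
import numpy as np

# Physical constants (SI)
m_e = 9.109e-31        # electron mass, kg
q_e = 1.602e-19        # elementary charge, C
c = 2.998e8            # speed of light, m/s

T = 100e3 * q_e        # kinetic energy of a 100 keV electron, J
B = 102e-4             # 102 gauss expressed in tesla

# Relativistic momentum from E_total^2 = (pc)^2 + (m c^2)^2
E0 = m_e * c**2
p = np.sqrt((T + E0) ** 2 - E0**2) / c

r_gyro = p / (q_e * B)                 # gyroradius in the transverse field
print(f"momentum p = {p:.3e} kg m/s, gyroradius ~ {100 * r_gyro:.1f} cm")
# A gyroradius on the order of ten centimetres is consistent with sweeping the
# focal spot over several centimetres with a field of roughly 100 G.
```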
NASA Astrophysics Data System (ADS)
Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin
1999-08-01
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
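The role of the nonnegative weight bias and the threshold subchannel can be illustrated with a plain Hebbian Hopfield memory: adding a constant c to every weight is compensated by subtracting c times the sum of the current state, which is exactly the kind of correction a separate threshold channel can carry. This sketch is a generic Hopfield recall under that assumption, not the proposed weight-optimization algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
patterns = np.sign(rng.standard_normal((3, n)))        # bipolar (+1/-1) stored patterns

# Hebbian bipolar interconnection weights (zero diagonal)
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

# Bias the weights so the optically implemented matrix is nonnegative
c = -W.min()
W_nonneg = W + c

def recall(state, steps=10):
    """Synchronous recall using the nonnegative matrix plus a threshold subchannel."""
    for _ in range(steps):
        threshold_channel = c * state.sum()            # carried by the extra channel
        state = np.sign(W_nonneg @ state - threshold_channel)
    return state

probe = patterns[0].copy()
probe[:8] *= -1                                        # corrupt a few bits
recovered = recall(probe)
print("overlap with stored pattern:", float(recovered @ patterns[0]) / n)
```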
Linifanib--a multi-targeted receptor tyrosine kinase inhibitor and a low molecular weight gelator.
Marlow, Maria; Al-Ameedee, Mohammed; Smith, Thomas; Wheeler, Simon; Stocks, Michael J
2015-04-14
In this study we demonstrate that linifanib, a multi-targeted receptor tyrosine kinase inhibitor, with a key urea containing pharmacophore, self-assembles into a hydrogel in the presence of low amounts of solvent. We demonstrate the role of the urea functional group and that of fluorine substitution on the adjacent aromatic ring in promoting self-assembly. We have also shown that linifanib has superior mechanical strength to two structurally related analogues and hence increased potential for localisation at an injection site for drug delivery applications.
WE-DE-201-08: Multi-Source Rotating Shield Brachytherapy Apparatus for Prostate Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadkhah, H; Wu, X; Kim, Y
Purpose: To introduce a novel multi-source rotating shield brachytherapy (RSBT) apparatus for the precise simultaneous angular and linear positioning of all partially-shielded 153Gd radiation sources in interstitial needles for treating prostate cancer. The mechanism is designed to lower the detrimental dose to healthy tissues, the urethra in particular, relative to conventional high-dose-rate brachytherapy (HDR-BT) techniques. Methods: Following needle implantation, the delivery system is docked to the patient template. Each needle is coupled to a multi-source afterloader catheter by a connector passing through a shaft. The shafts are rotated by translating a moving template between two stationary templates. Shaft walls as well as moving template holes are threaded such that the resistive friction produced between the two parts exerts enough force on the shafts to bring about the rotation. Rotation of the shaft is then transmitted to the shielded source via several keys. Thus, shaft angular position is fully correlated with the position of the moving template. The catheter angles are simultaneously incremented throughout treatment as needed, and only a single 360° rotation of all catheters is needed for a full treatment. For each rotation angle, source depth in each needle is controlled by a multi-source afterloader, which is proposed as an array of belt-driven linear actuators, each of which drives a source wire. Results: Optimized treatment plans based on Monte Carlo dose calculations demonstrated RSBT with the proposed apparatus reduced urethral D1cc below that of conventional HDR-BT by 35% for urethral dose gradient volume within 3 mm of the urethra surface. Treatment time to deliver 20 Gy with the multi-source RSBT apparatus using nineteen 62.4 GBq 153Gd sources is 117 min. Conclusions: The proposed RSBT delivery apparatus in conjunction with multiple nitinol catheter-mounted platinum-shielded 153Gd sources enables a mechanically feasible urethra-sparing treatment technique for prostate cancer in a clinically reasonable timeframe.
SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, J; Gao, H
2015-06-15
Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), which has potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first to reconstruct the initial acoustic pressure from the full-view ultrasonic data after each optical illumination, and then to quantitatively reconstruct optical coefficients (e.g., absorption and scattering coefficients) from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. Based on the novel limited-view multi-source scheme proposed here, we have to consider the direct reconstruction of optical coefficients from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable due to the incomplete acoustic data in the proposed limited-view scheme. In this work, based on a coupled photo-acoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotations of both optical sources and ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the acoustic data. Jing Feng and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
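A minimal sketch of the separation idea, under simplifying assumptions (a single-frequency, free-field monopole model in place of the paper's time-domain interpolated equivalent sources): all equivalent source strengths are solved jointly from the mixed pressure, and the field of one source is then re-radiated from its own strengths alone. All geometry and source strengths below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
mics = rng.uniform(-0.5, 0.5, size=(16, 3)) + np.array([0.0, 0.0, 0.5])  # measurement array
src1 = rng.uniform(-0.1, 0.1, size=(5, 3)) + np.array([-0.3, 0.0, 0.0])  # equivalent sources, source 1
src2 = rng.uniform(-0.1, 0.1, size=(5, 3)) + np.array([0.3, 0.0, 0.0])   # equivalent sources, source 2
k = 2 * np.pi * 1000 / 343.0  # wavenumber at 1 kHz

def transfer(points, sources):
    # free-field monopole transfer matrix between source and field points
    d = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

G1, G2 = transfer(mics, src1), transfer(mics, src2)
G = np.hstack([G1, G2])                       # mixed-field model: p = G q
q_true = rng.normal(size=10) + 1j * rng.normal(size=10)
p_mixed = G @ q_true                          # "measured" mixed pressure

q_hat, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)  # solve all strengths jointly
p_source1 = G1 @ q_hat[:5]                    # re-radiate only source 1's strengths
ref = G1 @ q_true[:5]
print("separation error:", np.linalg.norm(p_source1 - ref) / np.linalg.norm(ref))
```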
Multi-Targeted Agents in Cancer Cell Chemosensitization: What We Learnt from Curcumin Thus Far.
Bordoloi, Devivasha; Roy, Nand K; Monisha, Javadi; Padmavathi, Ganesan; Kunnumakkara, Ajaikumar B
2016-01-01
Research over the past several years has developed many mono-targeted therapies for the prevention and treatment of cancer, but cancer remains one of the most fatal diseases in the world, killing 8.2 million people annually. It has been well established that the development of chemoresistance in cancer cells against mono-targeted chemotherapeutic agents, through modulation of multiple survival pathways, is the major cause of failure of cancer chemotherapy. Therefore, inhibition of these pathways by non-toxic multi-targeted agents may have profound potential in preventing drug resistance and sensitizing cancer cells to chemotherapeutic agents. This review examines the potential of curcumin, a multi-targeted natural compound obtained from the turmeric plant (Curcuma longa), in combination with standard chemotherapeutic agents to inhibit drug resistance and sensitize cancer cells to these agents, based on the available literature and patents. An extensive literature survey was performed in PubMed and Google for the chemosensitizing potential of curcumin in different cancers published so far and the patents published during 2014-2015. Our search resulted in many in vitro, in vivo and clinical reports signifying the chemosensitizing potential of curcumin in diverse cancers. There were 160 in vitro studies, 62 in vivo studies and 5 clinical studies. Moreover, 11 studies reported on hybrid curcumin: the next generation of curcumin-based therapeutics. In addition, 34 patents on curcumin's biological activity were retrieved. Altogether, the present study reveals the enormous potential of curcumin, a natural, non-toxic, multi-targeted agent, in overcoming drug resistance in cancer cells and sensitizing them to chemotherapeutic drugs.
Probabilistic objective functions for sensor management
NASA Astrophysics Data System (ADS)
Mahler, Ronald P. S.; Zajic, Tim R.
2004-08-01
This paper continues the investigation of a foundational and yet potentially practical basis for control-theoretic sensor management, using a comprehensive, intuitive, system-level Bayesian paradigm based on finite-set statistics (FISST). In this paper we report our most recent progress, focusing on multistep look-ahead -- i.e., allocation of sensor resources throughout an entire future time-window. We determine future sensor states in the time-window using a "probabilistically natural" sensor management objective function, the posterior expected number of targets (PENT). This objective function is constructed using a new "maxi-PIMS" optimization strategy that hedges against unknowable future observation-collections. PENT is used in conjunction with approximate multitarget filters: the probability hypothesis density (PHD) filter or the multi-hypothesis correlator (MHC) filter.
Iacovino, Juliette M.; Jackson, Joshua J.; Oltmanns, Thomas F.
2015-01-01
The current study examines mechanisms of racial differences in symptoms of paranoid personality disorder (PPD) in a sample of adults ages 55–64 from the St. Louis, MO area. Socioeconomic status (SES) and childhood trauma were tested as intervening variables in the association between race and PPD symptoms using structural equation modeling. PPD symptoms were modeled as a latent variable composed of items from the PPD scales of the Multi-Source Assessment of Personality Pathology self and informant reports and the Structured Interview for the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM–IV) Personality. Childhood trauma was measured using the Traumatic Life Events Questionnaire, and SES was a composite of parent education, participant education, and annual household income. Blacks exhibited higher levels of PPD symptoms across the 3 personality measures, reported significantly lower SES, and reported greater childhood trauma. The proposed model was a good fit to the data, and the effect of race on PPD symptoms operated mainly through SES. The indirect effect through SES was stronger for males. Findings suggest that racial differences in PPD symptoms are partly explained by problems more commonly experienced by Black individuals. PMID:24661172
Prediction With Dimension Reduction of Multiple Molecular Data Sources for Patient Survival.
Kaplan, Adam; Lock, Eric F
2017-01-01
Predictive modeling from high-dimensional genomic data is often preceded by a dimension reduction step, such as principal component analysis (PCA). However, the application of PCA is not straightforward for multisource data, wherein multiple sources of 'omics data measure different but related biological components. In this article, we use recent advances in the dimension reduction of multisource data for predictive modeling. In particular, we apply exploratory results from Joint and Individual Variation Explained (JIVE), an extension of PCA for multisource data, for prediction of differing response types. We conduct illustrative simulations to illustrate the practical advantages and interpretability of our approach. As an application example, we consider predicting survival for patients with glioblastoma multiforme from 3 data sources measuring messenger RNA expression, microRNA expression, and DNA methylation. We also introduce a method to estimate JIVE scores for new samples that were not used in the initial dimension reduction and study its theoretical properties; this method is implemented in the R package R.JIVE on CRAN, in the function jive.predict.
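A simplified Python analogue of the dimension-reduction-then-prediction workflow (not JIVE itself, which separates joint from individual variation and is distributed as the R package r.jive): each synthetic 'omics block is reduced with its own PCA, the scores are concatenated, and a penalized classifier is cross-validated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Three synthetic 'omics blocks standing in for mRNA, miRNA and methylation data.
rng = np.random.default_rng(0)
n = 120
mrna  = rng.normal(size=(n, 500))
mirna = rng.normal(size=(n, 200))
meth  = rng.normal(size=(n, 800))
# Synthetic binary outcome loosely tied to two of the blocks.
y = (mrna[:, 0] + meth[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Reduce each source separately, then concatenate the scores.
# (JIVE would additionally split joint vs. individual variation; per-block PCA is
# only a stand-in. Fitting PCA on all samples before CV also leaks information
# and is acceptable only in a sketch.)
scores = [make_pipeline(StandardScaler(), PCA(n_components=5)).fit_transform(block)
          for block in (mrna, mirna, meth)]
X = np.hstack(scores)

clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```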
Fusion of multi-source remote sensing data for agriculture monitoring tasks
NASA Astrophysics Data System (ADS)
Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.
2016-12-01
Remote sensing data is an essential source of information for monitoring and quantifying crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days - GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse-resolution empirical yield models can be downscaled to provide yield estimates at regional and field scale. In particular, we present the case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA2 derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of the yearly derived winter crop maps is their use for stratification within area frame sampling for crop area estimation. In particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and the strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In the case of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
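The unsupervised winter-crop mapping step can be sketched with a two-component Gaussian mixture on synthetic NDVI/GDD-style pixel features; the component with the higher NDVI feature is taken as the winter-crop class, mimicking the reference-free labeling described above. The feature values and thresholds are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic per-pixel features: (peak NDVI before spring, accumulated GDD at green-up).
rng = np.random.default_rng(42)
winter = np.column_stack([rng.normal(0.65, 0.05, 500), rng.normal(250, 40, 500)])
other  = np.column_stack([rng.normal(0.30, 0.08, 1500), rng.normal(600, 120, 1500)])
pixels = np.vstack([winter, other])

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)
# Label the winter-crop component as the one with the higher mean NDVI feature,
# so no ground-truth samples are needed.
winter_comp = int(np.argmax(gmm.means_[:, 0]))
winter_mask = labels == winter_comp
print("fraction mapped as winter crop:", round(winter_mask.mean(), 3))
```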
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance in the Internet of Things. The detection of a moving target's shadow is also an important step in moving object detection, and the accuracy of shadow detection directly affects the object detection results. After reviewing a variety of shadow detection methods, we find that using only a single feature cannot produce accurate detection results. We therefore present a new shadow detection method that combines colour information, optical invariance and texture features. By comprehensively analyzing the detection results from these three kinds of information, shadows are effectively identified, and the experiments show that combining the advantages of the various methods yields good results.
Bi-level Multi-Source Learning for Heterogeneous Block-wise Missing Data
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified “bi-level” learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. PMID:23988272
Bi-level multi-source learning for heterogeneous block-wise missing data.
Xiang, Shuo; Yuan, Lei; Fan, Wei; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-11-15
Bio-imaging technologies allow scientists to collect large amounts of high-dimensional data from multiple heterogeneous sources for many biomedical applications. In the study of Alzheimer's Disease (AD), neuroimaging data, gene/protein expression data, etc., are often analyzed together to improve predictive power. Joint learning from multiple complementary data sources is advantageous, but feature-pruning and data source selection are critical to learn interpretable models from high-dimensional data. Often, the data collected has block-wise missing entries. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), most subjects have MRI and genetic information, but only half have cerebrospinal fluid (CSF) measures, a different half has FDG-PET; only some have proteomic data. Here we propose how to effectively integrate information from multiple heterogeneous data sources when data is block-wise missing. We present a unified "bi-level" learning model for complete multi-source data, and extend it to incomplete data. Our major contributions are: (1) our proposed models unify feature-level and source-level analysis, including several existing feature learning approaches as special cases; (2) the model for incomplete data avoids imputing missing data and offers superior performance; it generalizes to other applications with block-wise missing data sources; (3) we present efficient optimization algorithms for modeling complete and incomplete data. We comprehensively evaluate the proposed models including all ADNI subjects with at least one of four data types at baseline: MRI, FDG-PET, CSF and proteomics. Our proposed models compare favorably with existing approaches. © 2013 Elsevier Inc. All rights reserved.
Processing multisource feedback during residency under the guidance of a non-medical coach
Eckenhausen, Marina A.W.; ten Cate, Olle
2018-01-01
Objectives The present study aimed to investigate residents' preferences in dealing with personal multi-source feedback (MSF) reports with or without the support of a coach. Methods Residents employed for at least half a year in the study hospital were eligible to participate. All 43 residents opting to discuss their MSF report with a psychologist-coach before discussing results with the program director were included. Semi-structured interviews were conducted following individual coaching sessions. Qualitative and quantitative data were gathered using field notes. Results Seventy-four percent (n = 32) preferred always sharing the MSF report with a coach, 21% (n = 9) only if either the feedback or the relationship with the program director was less favorable, and 5% (n = 2) saw no difference between discussing with a coach or with the program director. In the final stage of training residents more often preferred the coach (82.6%, n = 19) than in the first stages (65%, n = 13). Reasons for discussing the report with a coach included her neutral and objective position, her expertise, and the open and safe context during the discussion. Conclusions Most residents preferred discussing multisource feedback results with a coach before their meeting with a program director, particularly if the results were negative. They appeared to struggle with the dual role of the program director (coaching and judging) and appreciated the expertise of a dedicated coach to navigate this confrontation. We encourage residency programs to consider offering residents neutral coaching when processing multisource feedback. PMID:29478041
Multisource Feedback in the Ambulatory Setting
Warm, Eric J.; Schauer, Daniel; Revis, Brian; Boex, James R.
2010-01-01
Background The Accreditation Council for Graduate Medical Education has mandated multisource feedback (MSF) in the ambulatory setting for internal medicine residents. Few published reports demonstrate actual MSF results for a residency class, and fewer still include clinical quality measures and knowledge-based testing performance in the data set. Methods Residents participating in a year-long group practice experience called the "long-block" received MSF that included self, peer, staff, attending physician, and patient evaluations, as well as concomitant clinical quality data and knowledge-based testing scores. Residents were given a rank for each data point compared with peers in the class, and these data were reviewed with the chief resident and program director over the course of the long-block. Results Multisource feedback identified residents who performed well on most measures compared with their peers (10%), residents who performed poorly on most measures compared with their peers (10%), and residents who performed well on some measures and poorly on others (80%). Each high-, intermediate-, and low-performing resident had at least one aspect of the MSF that was significantly lower than the others, and this served as the basis of formative feedback during the long-block. Conclusion Use of multisource feedback in the ambulatory setting can identify high-, intermediate-, and low-performing residents and suggest specific formative feedback for each. More research needs to be done on the effect of such feedback, as well as the relationships between each of the components in the MSF data set. PMID:21975632
Rochais, Christophe; Lecoutey, Cédric; Gaven, Florence; Giannoni, Patrizia; Hamidouche, Katia; Hedou, Damien; Dubost, Emmanuelle; Genest, David; Yahiaoui, Samir; Freret, Thomas; Bouet, Valentine; Dauphin, François; Sopkova de Oliveira Santos, Jana; Ballandonne, Céline; Corvaisier, Sophie; Malzert-Fréon, Aurélie; Legay, Remi; Boulouard, Michel; Claeysen, Sylvie; Dallemagne, Patrick
2015-04-09
In this work, we describe the synthesis and in vitro evaluation of a novel series of multitarget-directed ligands (MTDL) displaying both nanomolar dual-binding site (DBS) acetylcholinesterase inhibitory effects and partial 5-HT4R agonist activity, among which donecopride was selected for further in vivo evaluations in mice. The latter displayed procognitive and antiamnesic effects and enhanced sAPPα release, accounting for a potential symptomatic and disease-modifying therapeutic benefit in the treatment of Alzheimer's disease.
Optimum Multisensor, Multitarget Localization and Tracking.
1983-06-07
[Garbled OCR of the original abstract; the recoverable fragments concern the simultaneous solution for the parameter vector estimate and a coefficient of mutual dependence, and cite IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-29, No. 3, June 1981, and B. Friedlander, "An ARMA Modeling Approach to Multitarget Tracking."]
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates an object class label to each moving region (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. For achieving this characteristic, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using video surveillance benchmarks publicly accessible. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.
WEBGIS based CropWatch online agriculture monitoring system
NASA Astrophysics Data System (ADS)
Zhang, X.; Wu, B.; Zeng, H.; Zhang, M.; Yan, N.
2015-12-01
CropWatch, which was developed by the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), has achieved breakthrough results in the integration of methods, the independence of its assessments and its support to emergency response by periodically releasing global agricultural information. Taking advantage of multi-source remote sensing data and open data-sharing policies, the CropWatch group reports its monitoring results by publishing four bulletins per year. To better analyze the data, generate the bulletins, and provide an alternative way to access agricultural monitoring indicators and results, the CropWatch online system has been developed based on WebGIS techniques. Figure 1 shows the CropWatch online system structure and the system UI in Clustering mode. Data visualization is sorted into three different modes: Vector mode, Raster mode and Clustering mode. Vector mode provides the statistical values of all indicators over each monitoring unit, which allows users to compare the current situation with historical values (average, maximum, etc.). Users can compare the profiles of each indicator over the current growing season with the historical data in a chart by selecting a region of interest (ROI). Raster mode provides pixel-based anomalies of CropWatch indicators globally. In this mode, users are able to zoom in to the regions where notable anomalies were identified from the statistical values in Vector mode. Data from remote sensing image series at high temporal and low spatial resolution provide key information in agriculture monitoring. Clustering mode provides integrated information on different classes in maps, the corresponding profile for each class, and the percentage of the area of each class relative to the total area of all classes. The time series data are categorized into a limited number of types by the ISODATA algorithm. For each clustering type, pixels on the map, profiles, and the percentage legend are all linked together. All three visualization methods are applied at four scales, including 65 monitoring and reporting units (MRUs), 7 major production zones (MPZs), and 173 countries, with sub-national detail for 9 large countries. Agro-climatic information, agronomic information and indicators related to crop area, crop yield and crop production are provided.
Shemer, Avner; Levy, Hanna; Sadick, Neil S; Harth, Yoram; Dorizas, Andrew S
2014-11-01
In the last decade, energy-based aesthetic treatments, using light, radiofrequency (RF), and ultrasound, have gained scientific acceptance as safe and efficacious for non-invasive treatment of aesthetic skin disorders. The phase-controlled multisource radiofrequency technology (3DEEP™), which is based on the simultaneous use of multiple RF generators, has been proven to allow significant pigment-independent dermal heating without pain or the need for epidermal cooling. This study was performed in order to evaluate the efficacy and safety of a new handheld device delivering multisource radiofrequency to the skin for wrinkle reduction and skin tightening in the home setting. A total of 69 participants (age 54.3 ± 8.09 years; age range 37-72 years) were enrolled in the study after meeting all inclusion/exclusion criteria (100%) and providing informed consent. Participants were provided with the tested device together with a user manual and treatment diary, to perform independent treatments at home for 4 weeks. The tested device (Newa™, EndyMed Medical, Cesarea, Israel) emits 12 W of 1 MHz RF energy through six electrodes arranged in a linear fashion. Independent control of RF polarity through each of the 6 electrodes allows a significant reduction of energy flow through the epidermis with increased dermal penetration. Participants were instructed to perform at least 5 treatments a week, for one month. Four follow-up visits were scheduled (once a week) during the period of independent treatments at home, with further assessments after the 4 weeks of home treatments, at the 1-month follow-up visit (1 month after treatment end) and at the 3-month follow-up (3 months following treatment end). Analysis of pre- and post-treatment images was conducted by three uninvolved physicians experienced with the Fitzpatrick Wrinkle and Elastosis Scale. The Fitzpatrick Wrinkle and Elastosis score at each time point (4 weeks following home-use treatments; 1-month follow-up; 3-month follow-up) was compared to baseline. Participants were asked a series of questions designed to explore usability concerns and the level of satisfaction regarding the device use and subjective efficacy. Altogether, 62 subjects completed the study course and follow-up visits. No unexpected adverse effects were detected or reported throughout the independent treatment. No study participants experienced any difficulties while operating the tested device for independent wrinkle reduction treatments. Photographic analysis of images taken before treatment, after one month of independent home-use treatments, and at the one- and three-month follow-ups after the end of the treatment course was conducted by three uninvolved board-certified dermatologists. Analysis of the results revealed improvement (a downgrade of at least 1 score according to the Fitzpatrick scale) in 91.93%, 96.77%, and 98.39% of study subjects (according to the first, second, and third reviewer, respectively). The results were found to be statistically significant. The majority of study participants were very satisfied with the results of the independent treatment using the tested device for wrinkle reduction.
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi2 variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the concentrations of urinary metabolites alone classifies the CKD stage of the patients correctly. PMID:27861591
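A minimal sketch of the second stage described above, assuming the discriminant rules have already been mined and encoded as binary indicator features: an L2-penalized logistic regression is fit on those rule indicators and cross-validated. The rule features and labels below are synthetic, not the cohort data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each column answers "does the sample satisfy rule j?" (synthetic stand-in for
# the output of the exhaustive rule-mining step, which is not reproduced here).
rng = np.random.default_rng(0)
n_patients, n_rules = 110, 40
rule_features = (rng.random((n_patients, n_rules)) < 0.3).astype(float)
# Synthetic CKD-stage label loosely tied to the first few rules.
y = (rule_features[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_patients) > 1).astype(int)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, rule_features, y, cv=5).mean().round(3))
```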
Shi, Z; Ma, X H; Qin, C; Jia, J; Jiang, Y Y; Tan, C Y; Chen, Y Z
2012-02-01
Selective multi-target serotonin reuptake inhibitors enhance antidepressant efficacy. Their discovery can be facilitated by multiple methods, including in silico ones. In this study, we developed and tested an in silico method, combinatorial support vector machines (COMBI-SVMs), for virtual screening (VS) multi-target serotonin reuptake inhibitors of seven target pairs (serotonin transporter paired with noradrenaline transporter, H(3) receptor, 5-HT(1A) receptor, 5-HT(1B) receptor, 5-HT(2C) receptor, melanocortin 4 receptor and neurokinin 1 receptor respectively) from large compound libraries. COMBI-SVMs trained with 917-1951 individual target inhibitors correctly identified 22-83.3% (majority >31.1%) of the 6-216 dual inhibitors collected from literature as independent testing sets. COMBI-SVMs showed moderate to good target selectivity in misclassifying as dual inhibitors 2.2-29.8% (majority <15.4%) of the individual target inhibitors of the same target pair and 0.58-7.1% of the other 6 targets outside the target pair. COMBI-SVMs showed low dual inhibitor false hit rates (0.006-0.056%, 0.042-0.21%, 0.2-4%) in screening 17 million PubChem compounds, 168,000 MDDR compounds, and 7-8181 MDDR compounds similar to the dual inhibitors. Compared with similarity searching, k-NN and PNN methods, COMBI-SVM produced comparable dual inhibitor yields, similar target selectivity, and lower false hit rate in screening 168,000 MDDR compounds. The annotated classes of many COMBI-SVMs identified MDDR virtual hits correlate with the reported effects of their predicted targets. COMBI-SVM is potentially useful for searching selective multi-target agents without explicit knowledge of these agents. Copyright © 2011 Elsevier Inc. All rights reserved.
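The combinatorial idea can be sketched as follows, under the assumption that each target has its own binary SVM and a compound is flagged as a dual inhibitor only when both models in the pair predict it active; the bit-vector descriptors below are random stand-ins for molecular fingerprints, not real screening data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def make_actives(n, active_bits):
    # Random sparse "fingerprints" with a crude shared pattern for actives.
    X = (rng.random((n, 64)) < 0.1).astype(float)
    X[:, active_bits] = 1.0
    return X

X_a_act = make_actives(300, [0, 1, 2])            # inhibitors of target A
X_b_act = make_actives(300, [3, 4, 5])            # inhibitors of target B
X_inact = (rng.random((1000, 64)) < 0.1).astype(float)  # inactive compounds

# One SVM per individual target.
svm_a = SVC().fit(np.vstack([X_a_act, X_inact]), np.r_[np.ones(300), np.zeros(1000)])
svm_b = SVC().fit(np.vstack([X_b_act, X_inact]), np.r_[np.ones(300), np.zeros(1000)])

# A library compound is a dual-inhibitor virtual hit only if both SVMs agree.
library = (rng.random((5000, 64)) < 0.1).astype(float)
dual_hits = (svm_a.predict(library) == 1) & (svm_b.predict(library) == 1)
print("dual-inhibitor virtual hits:", int(dual_hits.sum()))
```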
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
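A minimal sketch of the model-free pipeline described above, on synthetic data rather than PPMI: the imbalanced cohort is rebalanced by upsampling the minority class, and boosted trees and an SVM are compared with stratified n-fold cross-validation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC
from sklearn.utils import resample

# Synthetic imbalanced cohort standing in for the study data (assumption).
rng = np.random.default_rng(0)
X_majority = rng.normal(0.0, 1.0, size=(400, 20))   # e.g., one diagnostic group
X_minority = rng.normal(0.8, 1.0, size=(60, 20))    # e.g., the under-represented group

# Rebalance by upsampling the minority class. NOTE: for honest estimates the
# resampling should be repeated inside each CV fold; it is done globally here
# only to keep the sketch short.
X_minority_up = resample(X_minority, replace=True, n_samples=400, random_state=0)
X = np.vstack([X_majority, X_minority_up])
y = np.r_[np.zeros(400), np.ones(400)]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```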
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When multiple complicated models, multi-source structured and unstructured data, and complex requirements analysis are involved, the platform design and integration of hydroinformatics systems become a challenge. To properly address these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, has been established.
The MiPACQ Clinical Question Answering System
Cairns, Brian L.; Nielsen, Rodney D.; Masanz, James J.; Martin, James H.; Palmer, Martha S.; Ward, Wayne H.; Savova, Guergana K.
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system’s architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation. PMID:22195068
The MiPACQ clinical question answering system.
Cairns, Brian L; Nielsen, Rodney D; Masanz, James J; Martin, James H; Palmer, Martha S; Ward, Wayne H; Savova, Guergana K
2011-01-01
The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system's architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation.
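The two headline metrics cited above, Precision at One and mean reciprocal rank, can be made concrete with small helper functions; the ranked answer lists and relevance sets below are hypothetical, not MiPACQ output.

```python
def precision_at_one(ranked_lists, relevant):
    """Fraction of questions whose top-ranked answer is relevant."""
    hits = sum(1 for ranks, rel in zip(ranked_lists, relevant) if ranks and ranks[0] in rel)
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, relevant):
    """Average of 1/rank of the first relevant answer (0 if none is retrieved)."""
    total = 0.0
    for ranks, rel in zip(ranked_lists, relevant):
        total += next((1.0 / (i + 1) for i, doc in enumerate(ranks) if doc in rel), 0.0)
    return total / len(ranked_lists)

# Hypothetical output of two QA configurations over three questions.
baseline = [["d3", "d1"], ["d9"], ["d2", "d7"]]
improved = [["d1", "d3"], ["d4", "d9"], ["d7", "d2"]]
relevant = [{"d1"}, {"d9"}, {"d7"}]
for name, runs in [("baseline", baseline), ("improved", improved)]:
    print(name, precision_at_one(runs, relevant), mean_reciprocal_rank(runs, relevant))
```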
A novel image encryption scheme based on Kepler’s third law and random Hadamard transform
NASA Astrophysics Data System (ADS)
Luo, Yu-Ling; Zhou, Rong-Long; Liu, Jun-Xiu; Qiu, Sen-Hui; Cao, Yi
2017-12-01
Abstract not available. Project supported by the National Natural Science Foundation of China (Grant Nos. 61661008 and 61603104), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant Nos. 2015GXNSFBA139256 and 2016GXNSFCA380017), the Funding of Overseas 100 Talents Program of Guangxi Provincial Higher Education, China, the Research Project of Guangxi University of China (Grant No. KY2016YB059), the Guangxi Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS15-07), the Doctoral Research Foundation of Guangxi Normal University, the Guangxi Provincial Experiment Center of Information Science, and the Innovation Project of Guangxi Graduate Education (Grant No. YCSZ2017055).
Parallel consensual neural networks.
Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H
1997-01-01
A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
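A rough sketch of the consensual scheme, with random projections standing in for the paper's data transforms (wavelet packets) and training accuracy standing in for the optimized stage weights: several stage networks are trained on differently transformed copies of the input and their weighted outputs are combined into a consensual decision. All data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.random_projection import GaussianRandomProjection

X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stages, weights = [], []
for seed in range(4):
    # Each stage sees its own transform of the input (random projection here).
    proj = GaussianRandomProjection(n_components=15, random_state=seed)
    Xt_tr = proj.fit_transform(X_tr)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=seed).fit(Xt_tr, y_tr)
    stages.append((proj, net))
    weights.append(net.score(Xt_tr, y_tr))   # crude weight; the paper optimizes these

weights = np.array(weights) / np.sum(weights)
probs = sum(w * net.predict_proba(proj.transform(X_te))
            for w, (proj, net) in zip(weights, stages))
consensus = probs.argmax(axis=1)
print("consensual accuracy:", round((consensus == y_te).mean(), 3))
```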
Chen, Yaqi; Chen, Zhui; Wang, Yi
2015-01-01
Screening and identifying active compounds from traditional Chinese medicine (TCM) and other natural products plays an important role in drug discovery. Here, we describe a magnetic beads-based multi-target affinity selection-mass spectrometry approach for screening bioactive compounds from natural products. Key steps and parameters including activation of magnetic beads, enzyme/protein immobilization, characterization of functional magnetic beads, screening and identifying active compounds from a complex mixture by LC/MS, are illustrated. The proposed approach is rapid and efficient in screening and identification of bioactive compounds from complex natural products.
Data association approaches in bearings-only multi-target tracking
NASA Astrophysics Data System (ADS)
Xu, Benlian; Wang, Zhiquan
2008-03-01
To meet the requirements on computational complexity and correctness of data association in multi-target tracking, two algorithms are proposed in this paper. The proposed Algorithm 1 is developed from a modified version of the dual Simplex method, and it has the advantage of a direct and explicit form of the optimal solution. Algorithm 2 is based on the idea of Algorithm 1 and a rotational sort method; it not only retains the advantages of Algorithm 1, but also reduces the computational burden, with a complexity of only 1/N that of Algorithm 1. Finally, numerical analyses are carried out to evaluate the performance of the two data association algorithms.
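The data-association step can be sketched as a standard linear assignment problem; the sketch below uses scipy's Hungarian solver rather than the paper's dual-Simplex-based algorithms, and the bearings and gating threshold are made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost = absolute bearing difference between predicted track bearings and new
# measurements (one extra measurement simulates possible clutter).
predicted_bearings = np.array([0.10, 0.85, 1.60, 2.40])        # radians, one per track
measured_bearings  = np.array([0.83, 2.45, 0.12, 1.58, 3.00])  # radians

cost = np.abs(predicted_bearings[:, None] - measured_bearings[None, :])
rows, cols = linear_sum_assignment(cost)   # optimal one-to-one association

gate = 0.1   # assumed gating threshold: reject pairings with large bearing error
for track, meas in zip(rows, cols):
    if cost[track, meas] < gate:
        print(f"track {track} <- measurement {meas} (error {cost[track, meas]:.3f} rad)")
```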
Multi-target drugs: the trend of drug research and development.
Lu, Jin-Jian; Pan, Wei; Hu, Yuan-Jia; Wang, Yi-Tao
2012-01-01
Summarizing the status of drugs in the market and examining the trend of drug research and development is important in drug discovery. In this study, we compared the drug targets and the market sales of the new molecular entities approved by the U.S. Food and Drug Administration from January 2000 to December 2009. Two networks, namely, the target-target and drug-drug networks, have been set up using the network analysis tools. The multi-target drugs have much more potential, as shown by the network visualization and the market trends. We discussed the possible reasons and proposed the rational strategies for drug research and development in the future.
Zhang, Lin; Shan, Yuanyuan; Ji, Xingyue; Zhu, Mengyuan; Li, Chuansheng; Sun, Ying; Si, Ru; Pan, Xiaoyan; Wang, Jinfeng; Ma, Weina; Dai, Bingling; Wang, Binghe; Zhang, Jie
2017-01-01
Receptor tyrosine kinases (RTKs), especially VEGFR-2, TIE-2, and EphB4, play a crucial role in both angiogenesis and tumorigenesis. Moreover, complexity and heterogeneity of angiogenesis make it difficult to treat such pathological traits with single-target agents. Herein, we developed two classes of multi-target RTK inhibitors (RTKIs) based on the highly conserved ATP-binding pocket of VEGFR-2/TIE-2/EphB4, using previously reported BPS-7 as a lead compound. These multi-target RTKIs exhibited considerable potential as novel anti-angiogenic and anticancer agents. Among them, QDAU5 displayed the most promising potency and selectivity. It significantly suppressed viability of EA.hy926 and proliferation of several cancer cells. Further investigations indicated that QDAU5 showed high affinity to VEGFR-2 and reduced the phosphorylation of VEGFR-2. We identified QDAU5 as a potent multiple RTKs inhibitor exhibiting prominent anti-angiogenic and anticancer potency both in vitro and in vivo. Moreover, quinazolin-4(3H)-one has been identified as an excellent hinge binding moiety for multi-target inhibitors of angiogenic VEGFR-2, Tie-2, and EphB4. PMID:29285210
School adjustment of children in residential care: a multi-source analysis.
Martín, Eduardo; Muñoz de Bustillo, María del Carmen
2009-11-01
School adjustment is one the greatest challenges in residential child care programs. This study has two aims: to analyze school adjustment compared to a normative population, and to carry out a multi-source analysis (child, classmates, and teacher) of this adjustment. A total of 50 classrooms containing 60 children from residential care units were studied. The "Método de asignación de atributos perceptivos" (Allocation of perceptive attributes; Díaz-Aguado, 2006), the "Test Autoevaluativo Multifactorial de Adaptación Infantil" (TAMAI [Multifactor Self-assessment Test of Child Adjustment]; Hernández, 1996) and the "Protocolo de valoración para el profesorado (Evaluation Protocol for Teachers; Fernández del Valle, 1998) were applied. The main results indicate that, compared with their classmates, children in residential care are perceived as more controversial and less integrated at school, although no differences were observed in problems of isolation. The multi-source analysis shows that there is agreement among the different sources when the externalized and visible aspects are evaluated. These results are discussed in connection with the practices that are being developed in residential child care programs.
Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data
NASA Astrophysics Data System (ADS)
Han, X.; Wu, J.
2018-04-01
The period from onset to stabilization is an important stage of disaster development. In addition to collecting and reporting information on the disaster situation, remote sensing images from satellites and drones and monitoring results from the disaster-stricken areas should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis allows disaster information to be assessed quickly and objectively. According to the characteristics of different hazards, models and methods driven by the requirements of rapid assessment missions are tested and screened. Based on remote sensing images and the features of the exposed elements, disaster-affected areas and intensity levels can be determined quickly, key disaster information about affected hospitals and schools as well as cultivated land and crops can be extracted, and decisions can be made after the emergency response using visual assessment results.
Multi-source remotely sensed data fusion for improving land cover classification
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Bo; Xu, Bing
2017-02-01
Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection (ASTER) digital elevation model (DEM) data, showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.
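A minimal sketch of feature-level fusion, assuming synthetic per-pixel feature blocks standing in for the OLI spectral bands, a temporal NDVI series, and the DEM topographic variables: the blocks are stacked and a single random forest is cross-validated, with a spectral-only baseline for comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_pixels, n_classes = 2000, 5
labels = rng.integers(0, n_classes, n_pixels)

# Synthetic per-pixel feature blocks (assumptions, not real imagery).
spectral = rng.normal(size=(n_pixels, 7)) + labels[:, None] * 0.3      # e.g., OLI bands
temporal = rng.normal(size=(n_pixels, 23)) + np.sin(labels)[:, None]   # e.g., NDVI time series
topo = rng.normal(size=(n_pixels, 3))                                  # elevation, slope, aspect

for name, X in [("spectral only", spectral),
                ("fused (spectral + temporal + topographic)",
                 np.hstack([spectral, temporal, topo]))]:
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X, labels, cv=3).mean()
    print(f"{name}: {acc:.3f}")
```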
Binaural segregation in multisource reverberant environments.
Roman, Nicoleta; Srinivasan, Soundararajan; Wang, DeLiang
2006-12-01
In a natural environment, speech signals are degraded by both reverberation and concurrent noise sources. While human listening is robust under these conditions using only two ears, current two-microphone algorithms perform poorly. The psychological process of figure-ground segregation suggests that the target signal is perceived as a foreground while the remaining stimuli are perceived as a background. Accordingly, the goal is to estimate an ideal time-frequency (T-F) binary mask, which selects the target if it is stronger than the interference in a local T-F unit. In this paper, a binaural segregation system that extracts the reverberant target signal from multisource reverberant mixtures by utilizing only the location information of target source is proposed. The proposed system combines target cancellation through adaptive filtering and a binary decision rule to estimate the ideal T-F binary mask. The main observation in this work is that the target attenuation in a T-F unit resulting from adaptive filtering is correlated with the relative strength of target to mixture. A comprehensive evaluation shows that the proposed system results in large SNR gains. In addition, comparisons using SNR as well as automatic speech recognition measures show that this system outperforms standard two-microphone beamforming approaches and a recent binaural processor.
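The ideal time-frequency binary mask that the system estimates can be written down directly when the target and interference are known separately, as in this sketch (synthetic signals, 0 dB local SNR criterion); a real system, of course, only has access to the mixture.

```python
import numpy as np
from scipy.signal import istft, stft

# Synthetic target and interference; a 440 Hz tone stands in for speech (assumption).
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
interference = 0.8 * np.random.default_rng(0).normal(size=fs)
mixture = target + interference

_, _, T = stft(target, fs, nperseg=512)
_, _, I = stft(interference, fs, nperseg=512)
_, _, M = stft(mixture, fs, nperseg=512)

ibm = (np.abs(T) > np.abs(I)).astype(float)   # ideal binary mask: keep units where target dominates
_, separated = istft(ibm * M, fs, nperseg=512)

n = min(len(separated), len(target))
def snr_db(signal, error):
    return 10 * np.log10(np.sum(signal**2) / np.sum(error**2))
print("input SNR  (dB):", round(snr_db(target, interference), 1))
print("output SNR (dB):", round(snr_db(target[:n], separated[:n] - target[:n]), 1))
```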
The application of the geography census data in seismic hazard assessment
NASA Astrophysics Data System (ADS)
Yuan, Shen; Ying, Zhang
2017-04-01
Because the basic data in the Sichuan province earthquake emergency database are not always up to date, there is a gap between post-earthquake disaster assessment results and the actual damage. In 2015, Sichuan completed its first provincial geographic conditions census, covering topography, traffic, vegetation coverage, water areas, desert and bare ground, the road network, residents and facilities, geographical units and geological hazards, as well as town planning, construction and ecological restoration in the Lushan earthquake-stricken area. On this basis, combined with existing basic geographic information data and high-resolution imagery, and supplemented by remote sensing image interpretation and geological survey, statistical analysis and information extraction were carried out on the distribution and change of hazard-affected elements, such as land cover, roads and infrastructure, in Lushan county before 2013 and after 2015. At the same time, the transformation and updating from geographic conditions census data to earthquake emergency basic data was achieved by studying their data types, structures and relationships. Finally, based on multi-source disaster information, including the changed hazard-affected element data and the coseismic displacement field of the Lushan magnitude-7.0 earthquake from the CORS network, intensity control points were obtained through information fusion. The seismic influence field was then corrected and the earthquake disaster was re-assessed through the Sichuan earthquake relief headquarters technology platform. Comparison of the new assessment result, the original assessment result and the actual earthquake disaster loss shows that the revised evaluation is closer to the actual loss. In the future, routine updating from geographic conditions census data to earthquake emergency basic data can be realized, ensuring the timeliness of the earthquake emergency database while continuously improving the accuracy of earthquake disaster assessment.
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral study by replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes that are capable of producing multiple direction-of-arrivals (DOAs), such data needs to be combined into meaningful estimates. Random Finite Set provides the mathematical probabilistic model, which is suitable for analysis and optimal estimation algorithm synthesis. Then the proposed algorithm has been verified using a simulation and a controlled test experiment.
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur
Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
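A minimal single-node sketch of the window-based statistical detection idea, with the Spark distribution layer omitted: a sliding window of recent values supplies the mean and variance against which each new value's z-score is tested. The window size, threshold, and injected anomaly are arbitrary choices for illustration.

```python
import math
import random
from collections import deque

WINDOW, THRESH = 200, 4.0        # sliding-window length and z-score threshold (assumed)
window = deque(maxlen=WINDOW)
random.seed(0)

def is_anomaly(x):
    if len(window) < WINDOW:      # not enough history yet: just accumulate
        window.append(x)
        return False
    mean = sum(window) / len(window)
    var = sum((v - mean) ** 2 for v in window) / len(window)
    z = abs(x - mean) / math.sqrt(var + 1e-12)
    window.append(x)
    return z > THRESH

stream = [random.gauss(10, 1) for _ in range(1000)]
stream[500] = 30.0                # injected sporadic anomaly
hits = [i for i, x in enumerate(stream) if is_anomaly(x)]
print("anomalous indices:", hits)
```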
NASA Astrophysics Data System (ADS)
Li, Deying; Yin, Kunlong; Gao, Huaxi; Liu, Changchun
2009-10-01
Although the Three Gorges Dam across the Yangtze River in China can exploit a huge potential source of hydroelectric power and eliminate loss of life and damage caused by floods, it also causes environmental problems, such as geo-hazards, due to the large rise and fluctuation of the water level. In order to prevent and predict geo-hazards, the establishment of a geo-hazard prediction system is necessary. To implement the functions of regional and urban geo-hazard prediction, single geo-hazard prediction, landslide surge prediction and risk evaluation, the logical layers of the system consist of a data capturing layer, a data manipulation and processing layer, an analysis and application layer, and an information publication layer. Because of the existence of multi-source spatial data, the transformation and fusion of multi-source data are also studied in this paper. The applicability of the system was tested on the spatial prediction of landslide hazard through GIS spatial analysis, in which the information value method was applied to identify areas susceptible to future landslides on the basis of historical records of past landslides, terrain parameters, geology, rainfall and anthropogenic activity. A detailed discussion is given of the spatial distribution characteristics of landslide hazard in the new town of Badong. These results can be used for risk evaluation. The system can be implemented as an early-warning and emergency management tool by the relevant authorities of the Three Gorges Reservoir in the future.
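The information value method mentioned above can be sketched directly: for each class of a causal factor, IV = ln((landslide cells in the class / all landslide cells) / (cells in the class / all cells)), and a cell's susceptibility is the sum of its classes' IVs over all factors. The factor grids and landslide inventory below are synthetic stand-ins, not the Badong data.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)
slope_class = rng.integers(0, 4, shape)          # e.g., 4 slope classes from a DEM
litho_class = rng.integers(0, 3, shape)          # e.g., 3 lithology classes
# Synthetic inventory: steeper classes fail more often.
landslide = rng.random(shape) < 0.02 * (1 + slope_class)

def information_values(factor, landslide):
    # IV per class = ln(landslide density in class / landslide density overall)
    ivs = {}
    for c in np.unique(factor):
        in_class = factor == c
        p_slide = landslide[in_class].sum() / max(landslide.sum(), 1)
        p_area = in_class.sum() / factor.size
        ivs[int(c)] = np.log(p_slide / p_area + 1e-12)
    return ivs

susceptibility = np.zeros(shape)
for factor in (slope_class, litho_class):
    ivs = information_values(factor, landslide)
    susceptibility += np.vectorize(ivs.get)(factor)   # sum IVs cell by cell
print("susceptibility range:", susceptibility.min().round(2), "to", susceptibility.max().round(2))
```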
Ye, Hongqiang; Ma, Qijun; Hou, Yuezhong; Li, Man; Zhou, Yongsheng
2017-12-01
Digital techniques are not clinically applied for 1-piece maxillary prostheses containing an obturator and removable partial denture retained by the remaining teeth because of the difficulty in obtaining sufficiently accurate 3-dimensional (3D) images. The purpose of this pilot clinical study was to generate 3D digital casts of maxillary defects, including the defective region and the maxillary dentition, based on multisource data registration and to evaluate their effectiveness. Twelve participants with maxillary defects were selected. The maxillofacial region was scanned with spiral computer tomography (CT), and the maxillary arch and palate were scanned using an intraoral optical scanner. The 3D images from the CT and intraoral scanner were registered and merged to form a 3D digital cast of the maxillary defect containing the anatomic structures needed for the maxillary prosthesis. This included the defect cavity, maxillary dentition, and palate. Traditional silicone impressions were also made, and stone casts were poured. The accuracy of the digital cast in comparison with that of the stone cast was evaluated by measuring the distance between 4 anatomic landmarks. Differences and consistencies were assessed using paired Student t tests and the intraclass correlation coefficient (ICC). In 3 participants, physical resin casts were produced by rapid prototyping from digital casts. Based on the resin casts, maxillary prostheses were fabricated by using conventional methods and then evaluated in the participants to assess the clinical applicability of the digital casts. Digital casts of the maxillary defects were generated and contained all the anatomic details needed for the maxillary prosthesis. Comparing the digital and stone casts, a paired Student t test indicated that differences in the linear distances between landmarks were not statistically significant (P>.05). High ICC values (0.977 to 0.998) for the interlandmark distances further indicated the high degree of consistency between the digital and stone casts. The maxillary prostheses showed good clinical effectiveness, indicating that the corresponding digital casts met the requirements for clinical application. Based on multisource data from spiral CT and the intraoral scanner, 3D digital casts of maxillary defects were generated using the registration technique. These casts were consistent with conventional stone casts in terms of accuracy and were suitable for clinical use. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Pradhan, Biswajeet; Chaudhari, Amruta; Adinarayana, J; Buchroithner, Manfred F
2012-01-01
In this paper, an attempt has been made to assess, forecast and observe the dynamics of soil erosion at Penang Island, Malaysia, using the universal soil loss equation (USLE) method. Multi-source (map-, space- and ground-based) datasets were used to obtain both the static and dynamic factors of the USLE, and an integrated analysis was carried out in raster GIS format. A landslide location map was generated on the basis of image-element interpretation from aerial photos, satellite data and field observations, and was used to validate soil erosion intensity in the study area. Further, a statistics-based frequency ratio analysis was carried out for correlation purposes. The statistical correlation showed a satisfactory, directly proportional agreement between the USLE-based soil erosion map and the landslide events/locations. Forecasting of soil erosion helps user agencies and decision makers design proper conservation planning programs to reduce soil erosion. Temporal statistics on soil erosion amid the dynamic and rapid development of Penang Island indicate the co-existence and balance of the ecosystem.
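The USLE itself is a simple multiplicative model, A = R · K · LS · C · P; once the static and dynamic factor rasters are co-registered in the GIS, the soil-loss map is an element-wise product. A minimal sketch with hypothetical factor grids (values and class breaks are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (300, 300)

# Hypothetical co-registered USLE factor rasters (units follow the usual convention).
R = rng.uniform(800, 1200, shape)    # rainfall erosivity
K = rng.uniform(0.1, 0.4, shape)     # soil erodibility
LS = rng.uniform(0.5, 8.0, shape)    # slope length/steepness
C = rng.uniform(0.01, 0.5, shape)    # cover management
P = rng.uniform(0.5, 1.0, shape)     # support practice

A = R * K * LS * C * P               # annual soil loss per cell

# Classify into erosion-intensity classes for comparison with landslide locations.
bins = [0, 10, 50, 100, 200, np.inf]
intensity = np.digitize(A, bins)
print("mean soil loss:", round(float(A.mean()), 1))
```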
A high prevalence of abnormal personality traits in chronic users of anabolic-androgenic steroids.
Cooper, C J; Noakes, T D; Dunne, T; Lambert, M I; Rochford, K
1996-01-01
OBJECTIVE: (1) To assess the personality profiles of anabolic-androgenic steroid (AAS) users and (2) to determine whether valid premorbid personality traits could be obtained from cross sectional assessment using multisource data. METHODS: The first author became a participant-observer in a group of body builders. An experimental group of body builders who had been using AAS for no more than 18 months (n = 12) was identified. A group of control subjects, each of whom claimed that he did not, and never had, used AAS (n = 12) was also recruited during this period. Key informants played a crucial role in recruiting subjects representative of the AAS and body building communities. An interview schedule based on the Diagnostic and statistical manual of mental disorders (DSM-III-R) personality disorder criteria was conducted with each subject. Additional data were obtained from an AAS using informant and significant others including family and friends. RESULTS: The user group was significantly heavier than the control group and showed abnormal personality traits, in contrast to the control group. Personality traits of AAS users before the onset of AAS use, assessed retrospectively, were not different from personality traits of control subjects. There were significant differences between the before and after personality traits in the AAS user group. CONCLUSIONS: The results suggest (1) that AAS use is associated with significant disturbances in personality profile, and (2) that these personality disturbances are possibly the direct result of AAS use. PMID:8889121
A high prevalence of abnormal personality traits in chronic users of anabolic-androgenic steroids.
Cooper, C J; Noakes, T D; Dunne, T; Lambert, M I; Rochford, K
1996-09-01
(1) To assess the personality profiles of anabolic-androgenic steroid (AAS) users and (2) to determine whether valid premorbid personality traits could be obtained from cross sectional assessment using multisource data. The first author became a participant-observer in a group of body builders. An experimental group of body builders who had been using AAS for no more than 18 months (n = 12) was identified. A group of control subjects, each of whom claimed that he did not, and never had, used AAS (n = 12) was also recruited during this period. Key informants played a crucial role in recruiting subjects representative of the AAS and body building communities. An interview schedule based on the Diagnostic and statistical manual of mental disorders (DSM-III-R) personality disorder criteria was conducted with each subject. Additional data were obtained from an AAS using informant and significant others including family and friends. The user group was significantly heavier than the control group and showed abnormal personality traits, in contrast to the control group. Personality traits of AAS users before the onset of AAS use, assessed retrospectively, were not different from personality traits of control subjects. There were significant differences between the before and after personality traits in the AAS user group. The results suggest (1) that AAS use is associated with significant disturbances in personality profile, and (2) that these personality disturbances are possibly the direct result of AAS use.
Evaluating the potential of improving residential water balance at building scale.
Agudelo-Vera, Claudia M; Keesman, Karel J; Mels, Adriaan R; Rijnaarts, Huub H M
2013-12-15
Earlier results indicated that, for an average household, self-sufficiency in water supply can be achieved by following the Urban Harvest Approach (UHA), in a combination of demand minimization, cascading and multi-sourcing. To achieve these results, it was assumed that all available local resources can be harvested. In reality, however, temporal, spatial and location-bound factors pose limitations to this harvest and, thus, to self-sufficiency. This article investigates potential spatial and temporal limitations to harvesting local water resources at building level for the Netherlands, with a focus on indoor demand. Two building types were studied, a free-standing house (one four-person household) and a mid-rise apartment flat (28 two-person households). To be able to model yearly water balances, daily patterns considering household occupancy and the presence of water-using appliances were defined per building type. Three strategies were defined, comprising demand minimization, light grey water (LGW) recycling, and rainwater harvesting (multi-sourcing). Recycling and multi-sourcing cater for toilet flushing and the laundry machine. Results showed that water-saving devices may reduce the conventional demand by 30%. Recycling of LGW can supply 100% of the second-quality water (DQ2), which represents 36% of the conventional demand or up to 20% of the minimized demand. Rainwater harvesting may supply approximately 80% of the minimized demand in the case of the apartment flat and 60% in the case of the free-standing house. To harvest these potentials, different system specifications, related to the household type, are required. Two constraints to recycling and multi-sourcing were identified, namely i) limitations in the grey water production and available rainfall; and ii) the potential to harvest water as determined by the temporal pattern in water availability, water use, and storage and treatment capacities. Copyright © 2013 Elsevier Ltd. All rights reserved.
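The rainwater-harvesting constraint in particular comes down to a daily tank mass balance: supply is limited both by rainfall and by storage capacity. A toy sketch under assumed roof area, tank size and second-quality (DQ2) demand, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(7)
rain_mm = rng.gamma(shape=0.3, scale=8.0, size=365)   # synthetic daily rainfall (mm)

roof_m2, runoff_coeff = 120.0, 0.8                    # assumed free-standing house roof
tank_cap_l = 3000.0                                   # assumed storage capacity (litres)
dq2_demand_l = 4 * 45.0                               # assumed toilet + laundry demand, 4 people

tank, supplied, demanded = 0.0, 0.0, 0.0
for rain in rain_mm:
    inflow = rain * roof_m2 * runoff_coeff            # 1 mm on 1 m2 = 1 litre
    tank = min(tank + inflow, tank_cap_l)             # overflow is lost
    use = min(dq2_demand_l, tank)
    tank -= use
    supplied += use
    demanded += dq2_demand_l

print(f"rainwater covers {100 * supplied / demanded:.0f}% of DQ2 demand")
```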
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
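The core of the framework, a random forest fed with multi-modality voxel features that is then re-trained with its own tissue probability maps appended as extra features, can be sketched as follows (synthetic voxels, scikit-learn, two refinement iterations; not the authors' implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_vox, n_classes = 5000, 3                       # GM, WM, CSF
X = rng.normal(size=(n_vox, 3))                  # per-voxel T1, T2, FA intensities (synthetic)
y = rng.integers(0, n_classes, n_vox)            # reference tissue labels (synthetic)
X[np.arange(n_vox), y] += 1.5                    # give each class a weak intensity cue

features = X.copy()
for it in range(2):                              # iterative refinement
    rf = RandomForestClassifier(n_estimators=100, random_state=it).fit(features, y)
    print(f"iteration {it}: training accuracy = {rf.score(features, y):.3f}")
    prob_maps = rf.predict_proba(features)       # GM/WM/CSF probability "maps"
    # Append the estimated tissue probabilities as additional source images.
    features = np.hstack([X, prob_maps])
```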
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
Targeting microbial biofilms: current and prospective therapeutic strategies
Koo, Hyun; Allan, Raymond N; Howlin, Robert P; Hall-Stoodley, Luanne; Stoodley, Paul
2017-01-01
Biofilm formation is a key virulence factor for a wide range of microorganisms that cause chronic infections. The multifactorial nature of biofilm development and drug tolerance imposes great challenges for the use of conventional antimicrobials, and indicates the need for multi-targeted or combinatorial therapies. In this review, we focus on current therapeutic strategies and those that are under development that target vital structural and functional traits of microbial biofilms and drug tolerance mechanisms, including the extracellular matrix and dormant cells. We emphasize strategies that are supported by in vivo or ex vivo studies, highlight emerging biofilm-targeting technologies, and provide a rationale for multi-targeted therapies that are aimed at disrupting the complex biofilm microenvironment. PMID:28944770
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction (MR) approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
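Mixture reduction in this setting repeatedly replaces a pair of association hypotheses (weighted Gaussians) with their moment-matched merge so the hypothesis count stays bounded. A minimal sketch of the merge step for two components (the generic moment-matching formula, not the paper's full MR algorithm):

```python
import numpy as np

def merge_gaussians(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = (m1 - m)[:, None], (m2 - m)[:, None]
    P = (w1 * (P1 + d1 @ d1.T) + w2 * (P2 + d2 @ d2.T)) / w
    return w, m, P

# Two track hypotheses for the same target (toy 2D state: position, velocity).
w1, m1, P1 = 0.6, np.array([10.0, 1.0]), np.diag([1.0, 0.2])
w2, m2, P2 = 0.4, np.array([11.5, 0.8]), np.diag([1.5, 0.3])
w, m, P = merge_gaussians(w1, m1, P1, w2, m2, P2)
print("merged weight:", w, "merged mean:", m.round(2))
```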
Global Ocean Evaporation Increases Since 1960 in Climate Reanalyses: How Accurate Are They?
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Roberts, Jason B.; Bosilovich, Michael G.
2016-01-01
Several families of evaporation estimates are compared. AGCMs with specified SSTs (AMIP-style runs: GEOS-5, ERA-20CM ensembles) incorporate the best historical estimates of SST, sea ice and radiative forcing, but the atmospheric "weather noise" is inconsistent with the specified SST and instantaneous surface fluxes can have the wrong sign (e.g., Indian Ocean Monsoon, high-latitude oceans); averaging over ensemble members helps isolate the SST-forced signal. Reduced-observational reanalyses (NOAA 20CR V2C, ERA-20C, JRA-55C) assimilate observed surface pressure (20CR), marine winds (ERA-20C) and rawinsondes (JRA-55C) to recover much of the true synoptic variability without the shock of new satellite observations. Comprehensive reanalyses (MERRA-2) use the full suite of observational constraints, both conventional and remote sensing, but carry substantial uncertainties owing to the evolving satellite observing system. Multi-source statistically blended products (OAFlux, Large-Yeager) blend reanalysis, satellite and ocean buoy information; while climatological biases are removed, non-physical trends or variations in components remain. Satellite retrievals (GSSTF3, SeaFlux, HOAPS3, ...) offer global coverage, with retrieved near-surface wind speed and humidity used with SST to drive accurate bulk aerodynamic flux estimates, but satellite inter-calibration and spacecraft pointing variations are crucial and the record is short (late 1987 to present). In situ measurements (ICOADS, IVAD, research cruises) from VOS and buoys offer direct measurements but suffer from sparse data coverage (especially south of 30S) and changes in measurement techniques (e.g., shipboard anemometer height).
Ontology driven integration platform for clinical and translational research
Mirhaji, Parsa; Zhu, Min; Vagnoni, Mattew; Bernstam, Elmer V; Zhang, Jiajie; Smith, Jack W
2009-01-01
Semantic Web technologies offer a promising framework for the integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize Semantic Web technologies not only for the integration, repurposing and classification of multi-source clinical data, but also to construct a distributed environment for information sharing and online collaboration. Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. Components of the semantic solution and its overall architecture are described. PMID:19208190
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
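For comparison, the classical non-parametric approach of Wax and Kailath applies an information-theoretic criterion (e.g., MDL) directly to the eigenvalues of the estimated correlation matrix; the sketch below implements that eigenvalue-based baseline (the paper's parametric criterion instead uses quantities derived from the log-likelihood function).

```python
import numpy as np

def mdl_source_count(R_hat, n_snapshots):
    """Classical MDL estimate of the number of sources from a sample correlation matrix."""
    lam = np.sort(np.linalg.eigvalsh(R_hat))[::-1]          # eigenvalues, descending
    p = lam.size
    mdl = []
    for k in range(p):
        tail = lam[k:]
        ratio = tail.mean() / np.exp(np.log(tail).mean())   # arithmetic / geometric mean
        mdl.append(n_snapshots * (p - k) * np.log(ratio)
                   + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

# Synthetic example: 2 sources impinging on a 6-element array, 500 snapshots.
rng = np.random.default_rng(3)
p, n, k_true = 6, 500, 2
A = rng.normal(size=(p, k_true))                            # mixing matrix
S = rng.normal(size=(k_true, n))                            # source signals
X = A @ S + 0.3 * rng.normal(size=(p, n))                   # noisy array data
R_hat = X @ X.T / n
print("estimated number of signals:", mdl_source_count(R_hat, n))
```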
Content-based image exploitation for situational awareness
NASA Astrophysics Data System (ADS)
Gains, David
2008-04-01
Image exploitation is of increasing importance to the enterprise of building situational awareness from multi-source data. It involves image acquisition, identification of objects of interest in imagery, storage, search and retrieval of imagery, and the distribution of imagery over possibly bandwidth limited networks. This paper describes an image exploitation application that uses image content alone to detect objects of interest, and that automatically establishes and preserves spatial and temporal relationships between images, cameras and objects. The application features an intuitive user interface that exposes all images and information generated by the system to an operator thus facilitating the formation of situational awareness.
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data and the type of accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
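One of the simplest registration building blocks covered in such overviews is phase correlation, which recovers a translational offset between two images from the peak of the inverse FFT of their normalized cross-power spectrum. A minimal NumPy sketch (translation only; real pipelines add rotation/scale handling and subpixel refinement):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) shift of mov relative to ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12                 # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(5)
ref = rng.random((128, 128))
mov = np.roll(ref, shift=(7, -12), axis=(0, 1))    # known offset
print(phase_correlation_shift(ref, mov))           # expect (7, -12)
```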
NASA Astrophysics Data System (ADS)
Gameiro, Isabel; Michalska, Patrycja; Tenti, Giammarco; Cores, Ángel; Buendia, Izaskun; Rojo, Ana I.; Georgakopoulos, Nikolaos D.; Hernández-Guijo, Jesús M.; Teresa Ramos, María; Wells, Geoffrey; López, Manuela G.; Cuadrado, Antonio; Menéndez, J. Carlos; León, Rafael
2017-03-01
The formation of neurofibrillary tangles (NFTs), oxidative stress and neuroinflammation have emerged as key targets for the treatment of Alzheimer’s disease (AD), the most prevalent neurodegenerative disorder. These pathological hallmarks are closely related to the over-activity of the enzyme GSK3β and the downregulation of the defense pathway Nrf2-EpRE observed in AD patients. Herein, we report the synthesis and pharmacological evaluation of a new family of multitarget 2,4-dihydropyrano[2,3-c]pyrazoles as dual GSK3β inhibitors and Nrf2 inducers. These compounds are able to inhibit GSK3β and induce the Nrf2 phase II antioxidant and anti-inflammatory pathway at micromolar concentrations, showing interesting structure-activity relationships. The association of both activities has resulted in a remarkable anti-inflammatory ability with an interesting neuroprotective profile on in vitro models of neuronal death induced by oxidative stress and energy depletion and AD. Furthermore, none of the compounds exhibited in vitro neurotoxicity or hepatotoxicity and hence they had improved safety profiles compared to the known electrophilic Nrf2 inducers. In conclusion, the combination of both activities in this family of multitarget compounds confers them a notable interest for the development of lead compounds for the treatment of AD.
Gameiro, Isabel; Michalska, Patrycja; Tenti, Giammarco; Cores, Ángel; Buendia, Izaskun; Rojo, Ana I.; Georgakopoulos, Nikolaos D.; Hernández-Guijo, Jesús M.; Teresa Ramos, María; Wells, Geoffrey; López, Manuela G.; Cuadrado, Antonio; Menéndez, J. Carlos; León, Rafael
2017-01-01
The formation of neurofibrillary tangles (NFTs), oxidative stress and neuroinflammation have emerged as key targets for the treatment of Alzheimer’s disease (AD), the most prevalent neurodegenerative disorder. These pathological hallmarks are closely related to the over-activity of the enzyme GSK3β and the downregulation of the defense pathway Nrf2-EpRE observed in AD patients. Herein, we report the synthesis and pharmacological evaluation of a new family of multitarget 2,4-dihydropyrano[2,3-c]pyrazoles as dual GSK3β inhibitors and Nrf2 inducers. These compounds are able to inhibit GSK3β and induce the Nrf2 phase II antioxidant and anti-inflammatory pathway at micromolar concentrations, showing interesting structure-activity relationships. The association of both activities has resulted in a remarkable anti-inflammatory ability with an interesting neuroprotective profile on in vitro models of neuronal death induced by oxidative stress and energy depletion and AD. Furthermore, none of the compounds exhibited in vitro neurotoxicity or hepatotoxicity and hence they had improved safety profiles compared to the known electrophilic Nrf2 inducers. In conclusion, the combination of both activities in this family of multitarget compounds confers them a notable interest for the development of lead compounds for the treatment of AD. PMID:28361919
Romero Durán, Francisco J.; Alonso, Nerea; Caamaño, Olga; García-Mera, Xerardo; Yañez, Matilde; Prado-Prado, Francisco J.; González-Díaz, Humberto
2014-01-01
In a multi-target complex network, the links (Lij) represent the interactions between the drug (di) and the target (tj), characterized by different experimental measures (Ki, Km, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (cj). In this work, we handle Shannon entropy measures for developing a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model predicts correctly >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%–90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets on 11 different organisms (including human). Here, we report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a high number of pharmacological tests not carried out experimentally. PMID:25255029
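The moving-average idea behind the ALMA-entropy model can be illustrated compactly: descriptors of each drug-target record are expressed as deviations from the average descriptor value observed under the same experimental condition (measure, target, etc.), and those deltas feed a classifier predicting whether the assay outcome is positive. A hedged pandas/scikit-learn sketch with synthetic records (the column names and descriptor are illustrative, not those of the paper):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 2000
df = pd.DataFrame({
    "descriptor": rng.normal(size=n),                       # e.g., an entropy-based index
    "measure": rng.choice(["IC50", "Ki", "Km"], n),          # experimental measure c_j
    "target": rng.choice([f"t{i}" for i in range(20)], n),   # molecular/cellular target
})
# Moving average per experimental condition and the deviation from it.
cond_mean = df.groupby(["measure", "target"])["descriptor"].transform("mean")
df["delta"] = df["descriptor"] - cond_mean
# Synthetic ground truth: the outcome depends (noisily) on that deviation.
df["active"] = (df["delta"] + 0.3 * rng.normal(size=n) > 0).astype(int)

X = df[["delta"]].to_numpy()
clf = LogisticRegression().fit(X, df["active"])
print("training accuracy:", round(clf.score(X, df["active"]), 3))
```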
NASA Astrophysics Data System (ADS)
Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph
2016-05-01
Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another; either companies from different sectors run different types of applications or, although deploying the same functionality, they do differ in the extent of their complexity. In any case, MTA leaves one major concern regarding the companies' data, their privacy and security, which requires special attention to the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.
Rusnati, Marco; Oreste, Pasqua; Zoppetti, Giorgio; Presta, Marco
2005-01-01
Heparin is a sulphated glycosaminoglycan currently used as an anticoagulant and antithrombotic drug. It consists largely of disaccharide units of 2-O-sulphated IdoA linked to N,6-O-disulphated GlcN. Other disaccharides containing unsulphated IdoA or GlcA and N-sulphated or N-acetylated GlcN are also present as minor components. This heterogeneity is more pronounced in heparan sulphate (HS), where the low-sulphated disaccharides are the most abundant. Heparin/HS bind to a variety of biologically active polypeptides, including enzymes, growth factors and cytokines, and viral proteins. This capacity can be exploited to design multi-target heparin/HS-derived drugs for pharmacological interventions in a variety of pathologic conditions besides coagulation and thrombosis, including neoplasia and viral infection. The capsular K5 polysaccharide from Escherichia coli has the same structure as the heparin precursor N-acetyl heparosan. The possibility of producing K5 polysaccharide derivatives by chemical and enzymatic modifications, thus generating heparin/HS-like compounds, has been demonstrated. These K5 polysaccharide derivatives are endowed with different biological properties, including anticoagulant/antithrombotic, antineoplastic, and anti-AIDS activities. Here, the literature data are discussed and the possible therapeutic implications for this novel class of multi-target "biotechnological heparin/HS" molecules are outlined.
Design and application of BIM based digital sand table for construction management
NASA Astrophysics Data System (ADS)
Fuquan, JI; Jianqiang, LI; Weijia, LIU
2018-05-01
This paper explores the design and application of a BIM-based digital sand table for construction management. Considering the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation of BIM, breakdown structure of the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual engineering information integration terminal under a unified data standard system. Its applications include visual construction scheme review, virtual construction scheduling, and construction monitoring. Finally, the applicability of several basic software packages to the digital sand table is analyzed.
NASA Astrophysics Data System (ADS)
Vieira, João; da Conceição Cunha, Maria
2017-04-01
A multi-objective decision model has been developed to identify the Pareto-optimal set of management alternatives for the conjunctive use of surface water and groundwater of a multisource urban water supply system. A multi-objective evolutionary algorithm, Borg MOEA, is used to solve the multi-objective decision model. The multiple solutions can be shown to stakeholders, allowing them to choose their own solutions depending on their preferences. The multisource urban water supply system studied here is dependent on surface water and groundwater and located in the Algarve region, the southernmost province of Portugal, with a typical warm Mediterranean climate. The rainfall is low, intermittent and concentrated in a short winter, followed by a long and dry period. A base population of 450 000 inhabitants and visits by more than 13 million tourists per year, mostly in summertime, make water management critical and challenging. Previous studies on single-objective optimization after aggregating multiple objectives together have already concluded that only an integrated and interannual water resources management perspective can be efficient for water resource allocation in this drought-prone region. A simulation model of the multisource urban water supply system, using mathematical functions to represent the water balance in the surface reservoirs, the groundwater flow in the aquifers, and the water transport in the distribution network with explicit representation of water quality, is coupled with Borg MOEA. The multi-objective problem formulation includes five objectives. Two objectives evaluate separately the water quantity and the water quality supplied for urban use over a finite time horizon, one objective calculates the operating costs, and two objectives appraise the state of the two water sources - the storage in the surface reservoir and the piezometric levels in the aquifer - at the end of the time horizon. The decision variables are the volumes of withdrawal from each water source in each time step (i.e., reservoir diversion and groundwater pumping). The results provide valuable information for analysing the impacts of the conjunctive use of surface water and groundwater. For example, considering a drought scenario, the results show how the same level of total water supplied can be achieved by different management alternatives with different impacts on the water quality, the costs, and the state of the water sources at the end of the time horizon. The results also allow a clear understanding of the potential benefits of the conjunctive use of surface water and groundwater through the mitigation of the variation in the availability of surface water, improving the water quantity and/or water quality delivered to the users, or the better adaptation of such systems to a changing world.
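Independently of the specific Borg MOEA implementation, the core output here is a set of non-dominated trade-offs between competing objectives. The toy sketch below samples conjunctive-use policies (fixed reservoir/aquifer withdrawal shares), scores them on two simplified objectives (supply deficit and operating cost), and extracts the Pareto set; all numbers are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(13)
demand, n_steps = 100.0, 24                              # monthly demand units, 2 years
inflow = rng.gamma(2.0, 40.0, n_steps)                   # synthetic reservoir inflows

def evaluate(reservoir_share):
    """Return (total supply deficit, total operating cost) for a fixed withdrawal split."""
    storage, deficit, cost = 500.0, 0.0, 0.0
    for q in inflow:
        storage = min(storage + q, 800.0)                # reservoir capacity cap
        take_res = min(reservoir_share * demand, storage)
        take_gw = (1.0 - reservoir_share) * demand       # aquifer assumed always available
        storage -= take_res
        deficit += max(demand - take_res - take_gw, 0.0)
        cost += 1.0 * take_res + 2.5 * take_gw           # groundwater pumping costs more
    return deficit, cost

policies = np.linspace(0.0, 1.0, 41)                     # share of demand taken from the reservoir
objs = np.array([evaluate(p) for p in policies])

# Keep the non-dominated (Pareto-optimal) policies; both objectives are minimized.
pareto = [i for i, a in enumerate(objs)
          if not any(np.all(b <= a) and np.any(b < a) for b in objs)]
for i in pareto[:5]:
    print(f"reservoir share {policies[i]:.2f}: deficit={objs[i][0]:.1f}, cost={objs[i][1]:.0f}")
```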
Wu, Peng; Huang, Yiyin; Kang, Longtian; Wu, Maoxiang; Wang, Yaobing
2015-01-01
A series of palladium-based catalysts with metal alloying (Sn, Pb) and/or (N-doped) graphene supports showing regularly enhanced electrocatalytic activity were investigated. The peak current density of PdSn/NG (118.05 mA cm−2) is higher than the summed current densities of Pd/NG and PdSn/G (45.63 + 47.59 mA cm−2), revealing a synergistic electrocatalytic oxidation effect in the PdSn/N-doped graphene nanocomposite. Extended experiments show that this multisource synergetic catalytic effect of metal alloying and the N-doped graphene support in one catalyst on the oxidation of small organic molecules (methanol, ethanol and ethylene glycol) is universal in PdM (M = Sn, Pb)/NG catalysts. Further, the high dispersion of small nanoparticles and the altered electron structure and Pd(0)/Pd(II) ratio of Pd in the catalysts, induced by the strong coupling of metal alloying and N-doped graphene, are responsible for the multisource synergistic catalytic effect in PdM (M = Sn, Pb)/NG catalysts. Finally, the catalytic durability and stability are also greatly improved. PMID:26434949
Silk, Kami J; Perrault, Evan K; Nazione, Samantha; Pace, Kristin; Hager, Polly; Springer, Steven
2013-12-01
The current study reports findings from evaluation research conducted to identify how online prostate cancer treatment decision-making information can be both improved and more effectively disseminated to those who need it most. A multi-method, multi-target approach was used and guided by McGuire's Communication Matrix Model. Focus groups (n = 31) with prostate cancer patients and their family members, and in-depth interviews with physicians (n = 8), helped inform a web survey (n = 89). Results indicated that physicians remain a key information source for medical advice and the Internet is a primary channel used to help make informed prostate cancer treatment decisions. Participants reported a need for more accessible information related to treatment options and treatment side effects. Additionally, physicians indicated that the best way for agencies to reach them with new information to deliver to patients is by contacting them directly and meeting with them one-on-one. Advice for organizations to improve their current prostate cancer web offerings and further ways to improve information dissemination are discussed.
Molecular targeted therapies for solid tumors: management of side effects.
Grünwald, Viktor; Soltau, Jens; Ivanyi, Philipp; Rentschler, Jochen; Reuter, Christoph; Drevs, Joachim
2009-03-01
This review will provide physicians and oncologists with an overview of side effects related to targeted agents that inhibit vascular endothelial growth factor (VEGF), epidermal growth factor (EGF) and mammalian target of rapamycin (mTOR) signaling in the treatment of solid tumors. Such targeted agents can be divided into monoclonal antibodies, tyrosine kinase inhibitors, multitargeted tyrosine kinase inhibitors and serine/threonine kinase inhibitors. Molecular targeted therapies are generally well tolerated, but inhibitory effects on the biological function of the targets in healthy tissue can result in specific treatment-related side effects, particularly with multitargeted agents. We offer some guidance on how to manage adverse events in cancer patients based on the range of options currently available. Copyright 2009 S. Karger AG, Basel.
Dual/multitargeted xanthone derivatives for Alzheimer's disease: where do we stand?
Cruz, Maria I; Cidade, Honorina; Pinto, Madalena
2017-09-01
To date, the current therapy for Alzheimer's disease (AD) based on acetylcholinesterase inhibitors is only symptomatic and its efficacy is limited. Hence, recent research has focused on the development of different pharmacological approaches. Here we discuss the potential of xanthone derivatives as new anti-Alzheimer agents. The interference of xanthone derivatives with acetylcholinesterase and other molecular targets and cellular mechanisms associated with AD has recently been systematically reported. Therefore, we report xanthones with anticholinesterase, monoamine oxidase and amyloid β aggregation inhibitory activities as well as antioxidant properties, emphasizing xanthone derivatives with dual/multitarget activity as potential agents to treat AD. We also propose the structural features for these activities that may guide the design of new, more effective xanthone derivatives.
Zhang, Jing-Jing; Muenzner, Julienne K; Abu El Maaty, Mohamed A; Karge, Bianka; Schobert, Rainer; Wölfl, Stefan; Ott, Ingo
2016-08-16
A rhodium(I) and a ruthenium(II) complex with a caffeine derived N-heterocyclic carbene (NHC) ligand were biologically investigated as organometallic conjugates consisting of a metal center and a naturally occurring moiety. While the ruthenium(II) complex was largely inactive, the rhodium(I) NHC complex displayed selective cytotoxicity and significant anti-metastatic and in vivo anti-vascular activities and acted as both a mammalian and an E. coli thioredoxin reductase inhibitor. In HCT-116 cells it increased the reactive oxygen species level, leading to DNA damage, and it induced cell cycle arrest, decreased the mitochondrial membrane potential, and triggered apoptosis. This rhodium(I) NHC derivative thus represents a multi-target compound with promising anti-cancer potential.
NASA Astrophysics Data System (ADS)
Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.
2016-06-01
This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland, Germany, comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping the hull polygons are projected into each possible input bitmap to find suitable color sources regarding the coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and conditionally grown, fused and filtered morphologically. The output polygons are vectorized and reintegrated into the previously reconstructed buildings by sparsely ray-tracing their vertices. Finally the enhanced 3D models get stored as textured geometry for visualization and semantically annotated "LOD-2.5" CityGML objects for GIS applications.
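The local RANSAC-based regression used to find planar roof and wall segments can be illustrated with a minimal plane-fitting sketch on a synthetic point cloud (single plane, NumPy only; the paper's pipeline additionally handles topology analysis and surface semantics):

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.05, rng=None):
    """Fit a plane (unit normal n, offset d with n·x = d) to 3D points via RANSAC."""
    rng = rng or np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ p1
        dist = np.abs(points @ normal - d)
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic "roof": noisy points on the plane z = 0.2x + 5 plus scattered outliers.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, (800, 2))
plane_pts = np.column_stack([xy, 0.2 * xy[:, 0] + 5 + rng.normal(0, 0.02, 800)])
outliers = rng.uniform(0, 10, (200, 3))
cloud = np.vstack([plane_pts, outliers])
model, inliers = ransac_plane(cloud, rng=rng)
print("plane normal:", model[0].round(3), "inliers:", int(inliers.sum()))
```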
SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K; Kim, D; Kim, T
2015-06-15
Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of the conventional computed tomography (CT) system, such as the scatter induced by large detector size and cone-beam artifacts. In this study, we present the concept of a four-dimensional (4D) IGCT system that keeps these advantages while adding temporal resolution for dynamic studies and reduction of motion artifacts. Methods: Contrary to a conventional CT system, the projection data at a certain angle in IGCT are a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, the time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the PGs acquired from the multi-source array were tagged with time and angle for 4D image reconstruction. The acquired PGs were sorted into 10 phases and image reconstruction was performed independently at each phase. An image reconstruction algorithm based upon filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT images. In addition, the 4D IGCT images of each phase had no significant artifact induced by motion compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information of the patient anatomy, being more robust to motion artifact than 3D CT. This makes the approach useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning.
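The 4D part of the acquisition is essentially phase tagging and binning: each projection group gets a timestamp, its respiratory phase is computed, and the groups are sorted into 10 phase bins that are reconstructed independently. A schematic sketch assuming a uniform gantry rotation and a known, regular breathing period (not the authors' exact sequence):

```python
import numpy as np

n_proj, rotation_time_s = 600, 60.0          # projection groups over one gantry rotation
breathing_period_s = 4.0                     # assumed regular respiration
n_phases = 10

t = np.linspace(0.0, rotation_time_s, n_proj, endpoint=False)   # acquisition time tags
gantry_angle = 360.0 * t / rotation_time_s                       # angle tags (degrees)
phase = (t % breathing_period_s) / breathing_period_s            # 0..1 respiratory phase
phase_bin = np.minimum((phase * n_phases).astype(int), n_phases - 1)

# Each phase bin collects the (angle, projection) pairs reconstructed independently.
for b in range(n_phases):
    idx = np.where(phase_bin == b)[0]
    gaps = np.diff(np.sort(gantry_angle[idx]))
    print(f"phase {b}: {idx.size} projection groups, max angular gap {gaps.max():.1f} deg")
```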
NASA Astrophysics Data System (ADS)
Guarnieri, A.; Masiero, A.; Piragnolo, M.; Pirotti, F.; Vettore, A.
2016-06-01
In this paper we present the results of the development of a Web-based archiving and documentation system aimed at the management of multisource and multitemporal data related to cultural heritage. As a case study we selected the building complex of Villa Revedin Bolasco in Castelfranco Veneto (Treviso, Italy) and its park. The buildings and park were built in the XIX century after several restorations of the original XIV century area. The data management system relies on a geodatabase framework in which different kinds of datasets are stored. More specifically, the geodatabase elements consist of historical information, documents, and descriptions of the artistic characteristics of the building and the park, in the form of text and images. In addition, we also used floorplans, sections and views of the outer facades of the building extracted from a TLS-based 3D model of the whole Villa. In order to manage and explore this rich dataset, we developed a geodatabase using PostgreSQL with PostGIS as the spatial plugin. The Web-GIS platform, based on the HTML5 and PHP programming languages, implements the NASA Web World Wind 3D virtual globe, which we used to enable the navigation and interactive exploration of the park. Furthermore, through a specific timeline function, the user can explore the historical evolution of the building complex.
Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.
2016-01-01
Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data–large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources–all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson’s disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson’s disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
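A compact illustration of the model-free pipeline described above, rebalancing the cohorts and then reporting n-fold cross-validated accuracy for a few classifiers, is sketched below on synthetic data (scikit-learn; AdaBoost and an SVM stand in for the boosting and SVM methods mentioned, and the features are placeholders for the clinical/genetic/imaging variables).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

# Synthetic imbalanced cohort: 80% "PD", 20% "control", 30 mixed-type features.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

# Rebalance by up-sampling the minority cohort to the majority size.
# (For a rigorous evaluation the resampling would be done inside each training fold.)
idx_maj, idx_min = np.where(y == 0)[0], np.where(y == 1)[0]
idx_min_up = resample(idx_min, replace=True, n_samples=idx_maj.size, random_state=0)
idx_bal = np.concatenate([idx_maj, idx_min_up])
Xb, yb = X[idx_bal], y[idx_bal]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, Xb, yb, cv=cv)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```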
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and different supervised approaches have been proposed for land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, owing to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively highest performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, a vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
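Reproducing this kind of accuracy-versus-runtime comparison is straightforward with scikit-learn; below, a quadratic discriminant (a Gaussian maximum-likelihood classifier), an SVM and a random forest are timed on a synthetic multi-feature stack standing in for the multiseasonal/multispectral inputs. The classifiers and data are illustrative, not the paper's IDL implementation.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic "pixels": 20 features (multiseasonal bands, NDVI, LST, topography), 8 classes.
X, y = make_classification(n_samples=20000, n_features=20, n_informative=12,
                           n_classes=8, n_clusters_per_class=1, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

classifiers = {
    "ML (QDA)": QuadraticDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=1),
}
for name, clf in classifiers.items():
    t0 = time.perf_counter()
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy={acc:.3f}, time={time.perf_counter() - t0:.1f}s")
```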
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes depend on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement for a large number of accurate training samples (10 to 30 |dimensions|), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
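The spirit of the semi-supervised part, a small set of labeled samples seeding class models whose parameters are then refined with abundant unlabeled pixels, can be sketched with a Gaussian mixture whose components are initialized from the labeled class means and re-estimated by EM over all samples (scikit-learn; a simplification of the paper's method, without the ancillary geospatial layers):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_classes, dim = 3, 5
true_means = rng.normal(scale=4.0, size=(n_classes, dim))

# Few labeled samples per class, many unlabeled samples.
X_lab = np.vstack([m + rng.normal(size=(15, dim)) for m in true_means])
y_lab = np.repeat(np.arange(n_classes), 15)
X_unlab = np.vstack([m + rng.normal(size=(2000, dim)) for m in true_means])

# Seed the mixture with the labeled class means, then refine with EM on all data.
init_means = np.array([X_lab[y_lab == c].mean(axis=0) for c in range(n_classes)])
gmm = GaussianMixture(n_components=n_classes, means_init=init_means,
                      covariance_type="full", random_state=0)
gmm.fit(np.vstack([X_lab, X_unlab]))

# With well-separated classes the components keep the seeded order closely enough
# to read component indices as class labels.
pred = gmm.predict(X_lab)
print("labeled-sample agreement:", round(float((pred == y_lab).mean()), 3))
```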
Quinazoline derivatives as potential anticancer agents: a patent review (2007 - 2010).
Marzaro, Giovanni; Guiotto, Adriano; Chilin, Adriana
2012-03-01
Due to the increase in knowledge about cancer pathways, there is a growing interest in finding novel potential drugs. Quinazoline is one of the most widespread scaffolds amongst bioactive compounds. A number of patents and papers appear in the literature regarding the discovery and development of novel promising quinazoline compounds for cancer chemotherapy. Although there is a progressive decrease in the number of patents filed, there is an increasing number of biochemical targets for quinazoline compounds. This paper provides a comprehensive review of the quinazolines patented in 2007 - 2010 as potential anticancer agents. Information from articles published in international peer-reviewed journals was also included, to give a more exhaustive overview. From about 1995 to 2006, the anticancer quinazolines panorama has been dominated by the 4-anilinoquinazolines as tyrosine kinase inhibitors. The extensive researches conducted in this period could have caused the progressive reduction in the ability to file novel patents as shown in the 2007 - 2010 period. However, the growing knowledge of cancer-related pathways has recently highlighted some novel potential targets for therapy, with quinazolines receiving increasing attention. This is well demonstrated by the number of different targets of the patents considered in this review. The structural heterogeneity in the patented compounds makes it difficult to derive general pharmacophores and make comparisons among claimed compounds. On the other hand, the identification of multi-target compounds seems a reliable goal. Thus, it is reasonable that quinazoline compounds will be studied and developed for multi-target therapies.
Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D
2013-06-28
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains.
Retrieval of biophysical parameters with AVIRIS and ISM: The Landes Forest, south west France
NASA Technical Reports Server (NTRS)
Zagolski, F.; Gastellu-Etchegorry, J. P.; Mougin, E.; Giordano, G.; Marty, G.; Letoan, T.; Beaudoin, A.
1992-01-01
The first steps of an experiment investigating the capability of airborne spectrometer data for the retrieval of biophysical parameters of vegetation, especially water conditions, are presented. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and ISM data were acquired in the frame of the 1991 NASA/JPL and CNES campaigns over the Landes, south-west France, a large and flat forest area with mainly maritime pines. In-situ measurements were completed at that time, i.e. reflectance spectra, atmospheric profiles, and sampling for further laboratory analyses of element concentrations (lignin, water, cellulose, nitrogen,...). All information was integrated into an already existing database (age, LAI, DBH, understory cover,...). A methodology was designed for (1) obtaining geometrically and atmospherically corrected reflectance data, (2) registering all available information, and (3) analyzing this multi-source information. Our objective is to conduct comparative studies with reflectance simulation models, and to improve these models, especially in the MIR.
Vitale, Rosa Maria; Rispoli, Vincenzo; Desiderio, Doriana; Sgammato, Roberta; Thellung, Stefano; Canale, Claudio; Vassalli, Massimo; Carbone, Marianna; Ciavatta, Maria Letizia; Mollo, Ernesto; Felicità, Vera; Arcone, Rosaria; Gavagnin Capoggiani, Margherita; Masullo, Mariorosario; Florio, Tullio; Amodeo, Pietro
2018-03-07
Multitargeting or polypharmacological approaches, looking for single chemical entities retaining the ability to bind two or more molecular targets, are a potentially powerful strategy to fight complex, multifactorial pathologies. Unfortunately, the search for multiligand agents is challenging because only a small subset of molecules contained in molecular databases are bioactive and even fewer are active on a preselected set of multiple targets. However, collections of natural compounds feature a significantly higher fraction of bioactive molecules than synthetic ones. In this view, we searched our library of 1175 natural compounds from marine sources for molecules including a 2-aminoimidazole+aromatic group motif, found in known compounds active on single relevant targets for Alzheimer's disease (AD). This identified two molecules, a pseudozoanthoxanthin (1) and a bromo-pyrrole alkaloid (2), which were predicted by a computational approach to possess interesting multitarget profiles on AD target proteins. Biochemical assays experimentally confirmed their biological activities. The two compounds inhibit acetylcholinesterase, butyrylcholinesterase, and β-secretase enzymes in high- to sub-micromolar range. They are also able to prevent and revert β-amyloid (Aβ) aggregation of both Aβ 1-40 and Aβ 1-42 peptides, with 1 being more active than 2. Preliminary in vivo studies suggest that compound 1 is able to restore cholinergic cortico-hippocampal functional connectivity.
Romero Durán, Francisco J; Alonso, Nerea; Caamaño, Olga; García-Mera, Xerardo; Yañez, Matilde; Prado-Prado, Francisco J; González-Díaz, Humberto
2014-09-24
In a multi-target complex network, the links (L(ij)) represent the interactions between the drug (d(i)) and the target (t(j)), characterized by different experimental measures (K(i), K(m), IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (c(j)). In this work, we handle Shannon entropy measures for developing a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model predicts correctly >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%-90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets on 11 different organisms (including human). Here, we report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a high number of pharmacological tests not carried out experimentally.
Multi-Targeted Antithrombotic Therapy for Total Artificial Heart Device Patients.
Ramirez, Angeleah; Riley, Jeffrey B; Joyce, Lyle D
2016-03-01
To prevent thrombotic or bleeding events in patients receiving a total artificial heart (TAH), multiple pharmacological agents have been used. The purpose of this article is to outline the adoption and results of a multi-targeted antithrombotic clinical procedure guideline (CPG) for TAH patients. Based on a literature review of TAH anticoagulation and multiple case series, a CPG was designed to prescribe the use of multiple pharmacological agents. Total blood loss, Thromboelastograph(®) (TEG), and platelet light-transmission aggregometry (LTA) measurements were conducted on 13 TAH patients during the first 2 weeks of support in our institution. Target values and actual medians for postimplant days 1, 3, 7, and 14 were calculated for kaolin-heparinase TEG, kaolin TEG, LTA, and estimated blood loss. Protocol guidelines were followed, and anticoagulation management reduced bleeding and prevented thrombus formation as well as thromboembolic events in TAH patients postimplantation. The patients in this study were susceptible to a variety of possible complications such as mechanical device issues, thrombotic events, infection, and bleeding. Among these, patients were most at risk for bleeding, particularly on postoperative days 1 through 3. However, bleeding was reduced by postoperative days 3 through 7, indicating that acceptable hemostasis was achieved with the anticoagulation protocol. The multidisciplinary, multi-targeted anticoagulation clinical procedure guideline was successful in maintaining adequate antithrombotic therapy for TAH patients.
Energy Harvesting Research: The Road from Single Source to Multisource.
Bai, Yang; Jantunen, Heli; Juuti, Jari
2018-06-07
Energy harvesting technology may be considered an ultimate solution to replace batteries and provide a long-term power supply for wireless sensor networks. Looking back at its research history, individual energy harvesters for the conversion of a single energy source into electricity were developed first, followed by hybrid counterparts designed for use with multiple energy sources. Very recently, the concept of a truly multisource energy harvester built from only a single piece of material as the energy conversion component has been proposed. This review gives an overview of energy harvesting research from the perspective of materials and device configurations. It covers single-source devices including solar, thermal, kinetic and other types of energy harvesters; hybrid energy harvesting configurations for both single and multiple energy sources; and single-material, multisource energy harvesters. It also includes the energy conversion principles of photovoltaic, electromagnetic, piezoelectric, triboelectric, electrostatic, electrostrictive, thermoelectric, pyroelectric, magnetostrictive, and dielectric devices. This is one of the most comprehensive reviews conducted to date, focusing on the entire energy harvesting research scene and providing a guide for seeking deeper and more specific research references and resources from every corner of the scientific community. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface defect inspection method and to make robust, automated delineation of image regions of interest (ROIs) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The presented method and the devised system are applicable to surface quality inspection for strip, billet, and slab surfaces, among others. In this work we take into account the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of ROIs can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through an RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing automated AI algorithms and powerful ROI delineation strategies to be applied in the MV inspection field.
Lyness, Karen S; Judiesch, Michael K
2008-07-01
The present study was the first cross-national examination of whether managers who were perceived to be high in work-life balance were expected to be more or less likely to advance in their careers than were less balanced, more work-focused managers. Using self ratings, peer ratings, and supervisor ratings of 9,627 managers in 33 countries, the authors examined within-source and multisource relationships with multilevel analyses. The authors generally found that managers who were rated higher in work-life balance were rated higher in career advancement potential than were managers who were rated lower in work-life balance. However, national gender egalitarianism, measured with Project GLOBE scores, moderated relationships based on supervisor and self ratings, with stronger positive relationships in low egalitarian cultures. The authors also found 3-way interactions of work-life balance ratings, ratee gender, and gender egalitarianism in multisource analyses in which self balance ratings predicted supervisor and peer ratings of advancement potential. Work-life balance ratings were positively related to advancement potential ratings for women in high egalitarian cultures and men in low gender egalitarian cultures, but relationships were nonsignificant for men in high egalitarian cultures and women in low egalitarian cultures.
Targeted Therapy Shows Benefit in Rare Type of Thyroid Cancer
Treatment with the multitargeted agent vandetanib (Caprelsa) improved progression-free survival in patients with medullary thyroid cancer (MTC), according to findings from a randomized clinical trial.
Heterogeneous mixture distributions for multi-source extreme rainfall
NASA Astrophysics Data System (ADS)
Ouarda, T.; Shin, J.; Lee, T. S.
2013-12-01
Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture to a combination of one identical distribution type. It might be beneficial to characterize the statistical behavior of hydro-meteorological variables through the application of heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic algorithm (the Genetic Algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme Value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) provides the best fit. The HTM distribution shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. It is shown that extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics. Results indicate that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
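As an illustrative sketch only (not the study's implementation), the snippet below fits a two-component Gamma + Gumbel (EV1) heterogeneous mixture to synthetic annual-maximum data by maximizing the log-likelihood with a generic evolutionary optimizer standing in for the Genetic Algorithm, and reports the AIC used for model comparison.

```python
# Illustrative sketch: Gamma + Gumbel (EV1) heterogeneous mixture fitted by
# maximum likelihood with an evolutionary optimizer. Data are synthetic.
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
x = np.concatenate([rng.gamma(3.0, 20.0, size=150),       # Gamma-like component
                    rng.gumbel(120.0, 25.0, size=100)])    # EV1-like component

def neg_loglik(theta):
    w, a, scale_g, loc_e, scale_e = theta
    pdf = (w * stats.gamma.pdf(x, a, scale=scale_g)
           + (1 - w) * stats.gumbel_r.pdf(x, loc=loc_e, scale=scale_e))
    return -np.sum(np.log(np.clip(pdf, 1e-300, None)))

bounds = [(0.01, 0.99), (0.5, 20), (1, 100), (50, 300), (1, 100)]
res = differential_evolution(neg_loglik, bounds, seed=1, tol=1e-8)
aic = 2 * len(res.x) + 2 * res.fun                          # AIC = 2k - 2 ln L
print("mixture weight:", round(res.x[0], 3), "AIC:", round(aic, 1))
```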
NASA Astrophysics Data System (ADS)
Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd
2018-01-01
The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process, with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with overall classification accuracies of 91.79% for the decision tree and 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
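Once the segmented objects and their attributes are exported as a table, the classifier comparison described above reduces to standard library calls; the hedged scikit-learn sketch below uses random placeholder features and labels rather than the paper's SPOT-6/RADARSAT-2 dataset.

```python
# Hedged sketch: object-level attribute table (spectral bands, SAR backscatter,
# NDVI/NDWI/SAVI, texture) classified with a decision tree, SVM, and RF.
# Features and labels are random placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))        # e.g. fused spectral/SAR/index attributes per object
y = rng.integers(0, 5, size=600)     # five land-cover classes (toy labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=10)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(Xtr, ytr)
    print(name, "overall accuracy:", round(accuracy_score(yte, clf.predict(Xte)), 3))
```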
Bautista-Aguilera, Óscar M; Hagenow, Stefanie; Palomino-Antolin, Alejandra; Farré-Alins, Víctor; Ismaili, Lhassane; Joffrin, Pierre-Louis; Jimeno, María L; Soukup, Ondřej; Janočková, Jana; Kalinowsky, Lena; Proschak, Ewgenij; Iriepa, Isabel; Moraleda, Ignacio; Schwed, Johannes S; Romero Martínez, Alejandro; López-Muñoz, Francisco; Chioua, Mourad; Egea, Javier; Ramsay, Rona R; Marco-Contelles, José; Stark, Holger
2017-10-02
The therapy of complex neurodegenerative diseases requires the development of multitarget-directed drugs (MTDs). Novel indole derivatives with inhibitory activity towards acetyl/butyrylcholinesterases and monoamine oxidases A/B as well as the histamine H3 receptor (H3R) were obtained by optimization of the neuroprotectant ASS234 by incorporating generally accepted H3R pharmacophore motifs. These small-molecule hits demonstrated balanced activities at the targets, mostly in the nanomolar concentration range. Additional in vitro studies showed antioxidative neuroprotective effects as well as the ability to penetrate the blood-brain barrier. With this promising in vitro profile, contilisant (at 1 mg kg-1 i.p.) also significantly improved lipopolysaccharide-induced cognitive deficits. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Polypharmacology Shakes Hands with Complex Aetiopathology.
Brodie, James S; Di Marzo, Vincenzo; Guy, Geoffrey W
2015-12-01
Chronic diseases are due to deviations of fundamental physiological systems, with different pathologies being characterised by similar malfunctioning biological networks. The ensuing compensatory mechanisms may weaken the body's dynamic ability to respond to further insults and reduce the efficacy of conventional single target treatments. The multitarget, systemic, and prohomeostatic actions emerging for plant cannabinoids exemplify what might be needed for future medicines. Indeed, two combined cannabis extracts were approved as a single medicine (Sativex(®)), while pure cannabidiol, a multitarget cannabinoid, is emerging as a treatment for paediatric drug-resistant epilepsy. Using emerging cannabinoid medicines as an example, we revisit the concept of polypharmacology and describe a new empirical model, the 'therapeutic handshake', to predict efficacy/safety of compound combinations of either natural or synthetic origin. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gao, Fengxiang; Mahoney, Jennifer C; Daly, Elizabeth R; Lamothe, Wendy; Tullo, Daniel; Bean, Christine
2014-01-01
A multitarget real-time PCR assay with three targets, including insertion sequence 481 (IS481), IS1001, and an IS1001-like element, as well as pertussis toxin subunit S1 (ptxS1), for the detection of Bordetella species was evaluated during a pertussis outbreak. The sensitivity and specificity were 77 and 88% (PCR) and 66 and 100% (culture), respectively. All patients with an IS481 CT of <30 also tested positive by ptxS1 assay and were clinical pertussis cases. No patients with IS481 CT values of ≥40 tested positive by culture. Therefore, we recommend that culture be performed only for specimens with IS481 CT values of 30 ≤ CT < 40.
Mahoney, Jennifer C.; Daly, Elizabeth R.; Lamothe, Wendy; Tullo, Daniel; Bean, Christine
2014-01-01
A multitarget real-time PCR assay with three targets, including insertion sequence 481 (IS481), IS1001, and an IS1001-like element, as well as pertussis toxin subunit S1 (ptxS1), for the detection of Bordetella species was evaluated during a pertussis outbreak. The sensitivity and specificity were 77 and 88% (PCR) and 66 and 100% (culture), respectively. All patients with an IS481 CT of <30 also tested positive by ptxS1 assay and were clinical pertussis cases. No patients with IS481 CT values of ≥40 tested positive by culture. Therefore, we recommend that culture be performed only for specimens with IS481 CT values of 30 ≤ CT < 40. PMID:24131698
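The reporting rule recommended above can be captured in a few lines; the function name and return convention below are ours, with the CT thresholds taken from the abstract.

```python
# Simple encoding of the recommended reflex-culture rule: culture only when
# the IS481 CT falls in the indeterminate window 30 <= CT < 40.
def reflex_culture_recommended(is481_ct: float) -> bool:
    """Return True if a specimen should be sent for culture."""
    return 30.0 <= is481_ct < 40.0

assert reflex_culture_recommended(35.2) is True
assert reflex_culture_recommended(27.8) is False   # strong positive, confirmed by ptxS1 instead
assert reflex_culture_recommended(41.0) is False   # beyond cutoff, culture unlikely to grow
```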
Hybrid Compounds as Anti-infective Agents.
Sbaraglini, María Laura; Talevi, Alan
2017-01-01
Hybrid drugs are multi-target chimeric chemicals combining two or more drugs or pharmacophores covalently linked in a single molecule. In the field of anti-infective agents, they have been proposed as a possible solution to drug resistance issues, presumably having a broader spectrum of activity and a lower probability of eliciting high-level resistance linked to a single gene product. Although less frequently explored, they could also be useful in the treatment of frequently occurring co-infections. Here, we overview recent advances in the field of hybrid antimicrobials. Furthermore, we discuss some cutting-edge approaches to the development of designed multi-target agents in the era of omics and big data, namely analysis of gene signatures and multitask QSAR models. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Optimal path planning for video-guided smart munitions via multitarget tracking
NASA Astrophysics Data System (ADS)
Borkowski, Jeffrey M.; Vasquez, Juan R.
2006-05-01
A recent advance in the development of smart munitions is the ability to autonomously modify target selection during flight in order to maximize the value of the target destroyed. A unique guidance law can be constructed that exploits both attribute and kinematic data obtained from an onboard video sensor. An optimal path planning algorithm has been developed with the goals of obstacle avoidance and maximizing the value of the target impacted by the munition. Target identification and classification provide a basis for target value, which is used in conjunction with multi-target tracks to determine an optimal waypoint for the munition. A dynamically feasible trajectory is computed to provide constraints on the waypoint selection. Results demonstrate the ability of the autonomous system to avoid moving obstacles and revise target selection in flight.
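As a toy sketch of the idea (not the paper's guidance law), the snippet below scores tracked targets by estimated value, keeps only those that pass a crude dynamic-feasibility check and a safe-distance test against known obstacles, and returns the waypoint of the best remaining target; all names and numbers are illustrative.

```python
# Toy sketch (not the paper's algorithm): value-maximizing waypoint selection
# with obstacle avoidance over a set of multi-target tracks.
import numpy as np

def select_waypoint(munition_xy, tracks, obstacles, max_turn_radius=50.0, safe_dist=30.0):
    """tracks: list of (position, estimated_value); obstacles: list of positions."""
    best, best_value = None, -np.inf
    for pos, value in tracks:
        pos = np.asarray(pos, dtype=float)
        reachable = np.linalg.norm(pos - munition_xy) > max_turn_radius  # crude feasibility proxy
        clear = all(np.linalg.norm(pos - np.asarray(o)) > safe_dist for o in obstacles)
        if reachable and clear and value > best_value:
            best, best_value = pos, value
    return best

wp = select_waypoint(np.array([0.0, 0.0]),
                     tracks=[((400.0, 120.0), 0.9), ((150.0, -60.0), 0.4)],
                     obstacles=[(390.0, 110.0)])
print(wp)   # the high-value target is rejected (too close to an obstacle)
```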
Geerts, Hugo; Kennis, Ludo
2014-01-01
Clinical development in brain diseases has one of the lowest success rates in the pharmaceutical industry, and many promising rationally designed single-target R&D projects fail in expensive Phase III trials. By contrast, successful older CNS drugs do have a rich pharmacology. This article will provide arguments suggesting that highly selective single-target drugs are not sufficiently powerful to restore complex neuronal circuit homeostasis. A rationally designed multitarget project can be derisked by dialing in an additional symptomatic treatment effect on top of a disease modification target. Alternatively, we expand upon a hypothetical workflow example using a humanized computer-based quantitative systems pharmacology platform. The hope is that incorporating rational multipharmacology into drug discovery could potentially lead to more impactful polypharmacy drugs.
A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags
NASA Astrophysics Data System (ADS)
Meng, S.; Xie, X.
2015-12-01
In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties from input data, model parameters, model structures and output observations. Data assimilation is a useful methodology to reduce uncertainties in flood forecasting. For short-term flood forecasting, an accurate estimation of the initial soil moisture condition will improve forecasting performance. The time delay of runoff routing is another important factor affecting forecasting performance. Moreover, observations of hydrological variables (including ground observations and satellite observations) are becoming easily available. The reliability of short-term flood forecasting could therefore be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates upper-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF) method, and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS) method. This smoothing technique is adopted to account for the runoff routing lag. Using such an assimilation framework for soil moisture and discharge observations is expected to improve flood forecasting. To distinguish the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag. Thus, this new data assimilation framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
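A minimal ensemble Kalman filter analysis step is sketched below to illustrate the kind of update used in the first (soil-moisture) assimilation step; the full dual EnKF/EnKS framework with the routing-lag smoother is not reproduced, and all dimensions and values are assumptions.

```python
# Minimal perturbed-observation EnKF analysis step (illustration only).
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)     # state anomalies
    HA = H @ A                                               # observation-space anomalies
    P_hh = HA @ HA.T / (n_members - 1) + (obs_err_std ** 2) * np.eye(n_obs)
    P_xh = A @ HA.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)                           # Kalman gain
    perturbed_obs = obs[:, None] + rng.normal(0, obs_err_std, size=(n_obs, n_members))
    return ensemble + K @ (perturbed_obs - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(0.25, 0.05, size=(3, 50))                   # e.g. layered soil-moisture states
H = np.array([[1.0, 0.0, 0.0]])                              # only the upper layer is observed
updated = enkf_update(ens, H, obs=np.array([0.32]), obs_err_std=0.02, rng=rng)
print(updated.mean(axis=1))                                  # analysis ensemble mean
```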
NASA Astrophysics Data System (ADS)
Ortuso, Francesco; Bagetta, Donatella; Maruca, Annalisa; Talarico, Carmine; Bolognesi, Maria L.; Haider, Norbert; Borges, Fernanda; Bryant, Sharon; Langer, Thierry; Senderowitz, Hanoch; Alcaro, Stefano
2018-04-01
For every lead compound developed in medicinal chemistry research, numerous other inactive or less active candidates are synthetized/isolated and tested. The majority of these compounds will not be selected for further development due to a sub-optimal pharmacological profile. However, some poorly active or even inactive compounds could live a second life if tested against other targets. Thus, new therapeutic opportunities could emerge and synergistic activities could be identified and exploited for existing compounds by sharing information between researchers who are working on different targets. The Mu.Ta.Lig (Multi-Target Ligand) Chemotheca database aims to offer such opportunities by facilitating information exchange among researchers worldwide. After a preliminary registration, users can (a) virtually upload structures of their compounds, together with any corresponding known activity data, and (b) search for other available compounds uploaded by the user community. Each piece of information about given compounds is owned by the user who initially uploaded it, and multiple ownership is possible (this occurs if different users uploaded the same compounds or information pertaining to the same compounds). A web-based graphical user interface has been developed to assist compound uploading, compound searching and data retrieval. Physico-chemical and ADME properties as well as substructure-based PAINS evaluations are computed on the fly for each uploaded compound. Samples of compounds that match a set of search criteria, and additional data on these compounds, could be requested directly from their owners with no mediation by the Mu.Ta.Lig Chemotheca team. Guest access provides a simplified search interface to retrieve only basic information such as compound IDs and related 2D or 3D chemical structures. Moreover, some compounds can be hidden from Guest users according to an owner’s decision. In contrast, registered users have full access to all of the Chemotheca data including the permission to upload new compounds and/or update experimental/theoretical data (e.g., activities against new targets tested) related to already stored compounds. In order to facilitate scientific collaborations, all available data are connected to the corresponding owner’s email address (available for registered users only). The Chemotheca web site is accessible at http://chemotheca.unicz.it.
Ortuso, Francesco; Bagetta, Donatella; Maruca, Annalisa; Talarico, Carmine; Bolognesi, Maria L; Haider, Norbert; Borges, Fernanda; Bryant, Sharon; Langer, Thierry; Senderowitz, Hanoch; Alcaro, Stefano
2018-01-01
For every lead compound developed in medicinal chemistry research, numerous other inactive or less active candidates are synthetized/isolated and tested. The majority of these compounds will not be selected for further development due to a sub-optimal pharmacological profile. However, some poorly active or even inactive compounds could live a second life if tested against other targets. Thus, new therapeutic opportunities could emerge and synergistic activities could be identified and exploited for existing compounds by sharing information between researchers who are working on different targets. The Mu.Ta.Lig (Multi-Target Ligand) Chemotheca database aims to offer such opportunities by facilitating information exchange among researchers worldwide. After a preliminary registration, users can (a) virtually upload structures of their compounds, together with any corresponding known activity data, and (b) search for other available compounds uploaded by the user community. Each piece of information about given compounds is owned by the user who initially uploaded it, and multiple ownership is possible (this occurs if different users uploaded the same compounds or information pertaining to the same compounds). A web-based graphical user interface has been developed to assist compound uploading, compound searching and data retrieval. Physico-chemical and ADME properties as well as substructure-based PAINS evaluations are computed on the fly for each uploaded compound. Samples of compounds that match a set of search criteria, and additional data on these compounds, could be requested directly from their owners with no mediation by the Mu.Ta.Lig Chemotheca team. Guest access provides a simplified search interface to retrieve only basic information such as compound IDs and related 2D or 3D chemical structures. Moreover, some compounds can be hidden from Guest users according to an owner's decision. In contrast, registered users have full access to all of the Chemotheca data including the permission to upload new compounds and/or update experimental/theoretical data (e.g., activities against new targets tested) related to already stored compounds. In order to facilitate scientific collaborations, all available data are connected to the corresponding owner's email address (available for registered users only). The Chemotheca web site is accessible at http://chemotheca.unicz.it.
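As a hedged sketch of the kind of on-upload processing described (and not the Chemotheca backend itself), the RDKit snippet below computes a few physico-chemical descriptors and a substructure-based PAINS flag for a submitted SMILES string; the function name and returned fields are our own choices.

```python
# Not the Chemotheca backend: hedged RDKit sketch of on-upload annotation
# (physico-chemical descriptors plus a PAINS substructure flag).
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
pains_catalog = FilterCatalog(params)

def annotate_upload(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("unparsable SMILES")
    return {
        "MW": round(Descriptors.MolWt(mol), 2),
        "cLogP": round(Descriptors.MolLogP(mol), 2),
        "TPSA": round(Descriptors.TPSA(mol), 2),
        "PAINS_alert": pains_catalog.HasMatch(mol),
    }

print(annotate_upload("CCOc1ccc2nc(S(N)(=O)=O)sc2c1"))  # ethoxzolamide as a test molecule
```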
Incorporating Target Priorities in the Sensor Tasking Reward Function
NASA Astrophysics Data System (ADS)
Gehly, S.; Bennett, J.
2016-09-01
Orbital debris tracking poses many challenges, most fundamentally the need to track a large number of objects from a limited number of sensors. The use of information theoretic sensor allocation provides a means to efficiently collect data on the multitarget system. An additional need of the community is the ability to specify target priorities, driven both by user needs and environmental factors such as collision warnings. This research develops a method to incorporate target priorities in the sensor tasking reward function, allowing for several applications in different tasking modes such as catalog maintenance, calibration, and collision monitoring. A set of numerical studies is included to demonstrate the functionality of the method.
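A hedged sketch of one way to build a priority-weighted, information-theoretic reward is given below: information gain is approximated by the reduction in Gaussian differential entropy of each object's state covariance, scaled by a user-assigned priority so that, for example, conjunction candidates win tasking. The functional form and numbers are assumptions, not the paper's reward.

```python
# Hedged sketch: priority-weighted entropy-reduction reward for sensor tasking.
import numpy as np

def gaussian_entropy(P):
    """Differential entropy of a Gaussian with covariance P."""
    n = P.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(P))

def reward(P_prior, P_posterior, priority=1.0):
    """Priority-weighted expected entropy reduction for one candidate observation."""
    return priority * (gaussian_entropy(P_prior) - gaussian_entropy(P_posterior))

# Two candidate objects: B is less uncertain but carries a higher priority.
P_a_prior, P_a_post = np.diag([100.0, 100.0]), np.diag([10.0, 10.0])
P_b_prior, P_b_post = np.diag([25.0, 25.0]), np.diag([5.0, 5.0])
rewards = {"A": reward(P_a_prior, P_a_post, priority=1.0),
           "B": reward(P_b_prior, P_b_post, priority=3.0)}
print(max(rewards, key=rewards.get), rewards)   # the high-priority object wins tasking
```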
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
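The single-node core of this pipeline can be sketched as follows (the distributed, in-cluster lifting transform and the maximum-entropy coefficient selection are not reproduced): a Haar DWT of a sensor time series followed by normalized per-subband energies used as classifier features; the signal here is synthetic.

```python
# Minimal single-node sketch: Haar DWT + per-subband energy features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256)) + 0.3 * rng.normal(size=256)

coeffs = pywt.wavedec(signal, "haar", level=4)     # [approx, detail_4, ..., detail_1]
features = np.array([np.sum(c ** 2) for c in coeffs])
features /= features.sum()                         # normalized subband energies

print([round(f, 3) for f in features])             # feature vector for the classifier
```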
Straube, Andreas; Aicher, Bernhard; Fiebich, Bernd L; Haag, Gunther
2011-03-31
Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas (the "pain matrix") is involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is that they provide a therapeutic benefit that could not be achieved by the individual constituents alone: the single substances of the combination act together additively or even multiplicatively and cooperate to achieve the complete desired therapeutic effect. As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics, without adding more side effects to the therapy. Multitarget therapeutics like combined analgesics broaden the array of therapeutic options, enable the completeness of the therapeutic effect, and allow doctors (and, in self-medication with OTC medications, the patients themselves) to customize treatment to the patient's specific needs. There is substantial clinical evidence that such a multi-component therapy is more effective than mono-component therapies.
2011-01-01
Background Pain in general and headache in particular are characterized by a change in activity in brain areas involved in pain processing. The therapeutic challenge is to identify drugs with molecular targets that restore the healthy state, resulting in meaningful pain relief or even freedom from pain. Different aspects of pain perception, i.e. sensory and affective components, also explain why there is not just one single target structure for therapeutic approaches to pain. A network of brain areas (the "pain matrix") is involved in pain perception and pain control. This diversification of the pain system explains why a wide range of molecularly different substances can be used in the treatment of different pain states and why in recent years more and more studies have described a superior efficacy of a precise multi-target combination therapy compared to therapy with monotherapeutics. Discussion In this article, we discuss the available literature on the effects of several fixed-dose combinations in the treatment of headaches and discuss the evidence in support of the role of combination therapy in the pharmacotherapy of pain, particularly of headaches. The scientific rationale behind multi-target combinations is that they provide a therapeutic benefit that could not be achieved by the individual constituents alone: the single substances of the combination act together additively or even multiplicatively and cooperate to achieve the complete desired therapeutic effect. As an example, the fixed-dose combination of acetylsalicylic acid (ASA), paracetamol (acetaminophen) and caffeine is reviewed in detail. The major advantage of using such a fixed combination is that the active ingredients act on different but distinct molecular targets and thus are able to act on more signalling cascades involved in pain than most single analgesics, without adding more side effects to the therapy. Summary Multitarget therapeutics like combined analgesics broaden the array of therapeutic options, enable the completeness of the therapeutic effect, and allow doctors (and, in self-medication with OTC medications, the patients themselves) to customize treatment to the patient's specific needs. There is substantial clinical evidence that such a multi-component therapy is more effective than mono-component therapies. PMID:21453539
WHO Expert Committee on Specifications for Pharmaceutical Preparations.
2012-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use: Development of monographs for The International Pharmacopoeia; WHO good manufacturing practices: water for pharmaceutical use; Pharmaceutical development of multisource (generic) pharmaceutical products--points to consider; Guidelines on submission of documentation for a multisource (generic) finished pharmaceutical product for the WHO Prequalification of Medicines Programme: quality part; Development of paediatric medicines: points to consider in formulation; Recommendations for quality requirements for artemisinin as a starting material in the production of antimalarial active pharmaceutical ingredients.
Evaluation of the Maximum Allowable Cost Program
Lee, A. James; Hefner, Dennis; Dobson, Allen; Hardy, Ralph
1983-01-01
This article summarizes an evaluation of the Maximum Allowable Cost (MAC)-Estimated Acquisition Cost (EAC) program, the Federal Government's cost-containment program for prescription drugs. The MAC-EAC regulations, which became effective on August 26, 1976, have four major components: (1) Maximum Allowable Cost reimbursement limits for selected multisource or generically available drugs; (2) Estimated Acquisition Cost reimbursement limits for all drugs; (3) “usual and customary” reimbursement limits for all drugs; and (4) a directive that professional fee studies be performed by each State. The study examines the benefits and costs of the MAC reimbursement limits for 15 dosage forms of five multisource drugs and EAC reimbursement limits for all drugs for five selected States as of 1979. PMID:10309857
GUENDELMAN, MAYA D.; OWENS, ELIZABETH B.; GALÁN, CHARDEE; GARD, ARIANNA; HINSHAW, STEPHEN P.
2016-01-01
We examined whether maltreatment experienced in childhood and/or adolescence prospectively predicts young adult functioning in a diverse and well-characterized sample of females with childhood-diagnosed attention-deficit/hyperactivity disorder (N = 140). Participants were part of a longitudinal study and carefully evaluated in childhood, adolescence, and young adulthood (M age = 9.6, 14.3, and 19.7 years, respectively), with high retention rates across time. A thorough review of multisource data reliably established maltreatment status for each participant (M κ = 0.78). Thirty-two (22.9%) participants experienced at least one maltreatment type (physical abuse, sexual abuse, or neglect). Criterion variables included a broad array of young adult measures of functioning gleaned from multiple-source, multiple-informant instruments. With stringent statistical control of demographic, prenatal, and family status characteristics as well as baseline levels of the criterion variable in question, maltreated participants were significantly more impaired than nonmaltreated participants with respect to self-harm (suicide attempts), internalizing symptomatology (anxiety and depression), eating disorder symptomatology, and well-being (lower overall self-worth). Effect sizes were medium. Comprising the first longitudinal evidence linking maltreatment with key young adult life impairments among a carefully diagnosed and followed sample of females with attention-deficit/hyperactivity disorder, these findings underscore the clinical importance of trauma experiences within this population. PMID:25723055
Guendelman, Maya D; Owens, Elizabeth B; Galán, Chardee; Gard, Arianna; Hinshaw, Stephen P
2016-02-01
We examined whether maltreatment experienced in childhood and/or adolescence prospectively predicts young adult functioning in a diverse and well-characterized sample of females with childhood-diagnosed attention-deficit/hyperactivity disorder (N = 140). Participants were part of a longitudinal study and carefully evaluated in childhood, adolescence, and young adulthood (M age = 9.6, 14.3, and 19.7 years, respectively), with high retention rates across time. A thorough review of multisource data reliably established maltreatment status for each participant (M κ = 0.78). Thirty-two (22.9%) participants experienced at least one maltreatment type (physical abuse, sexual abuse, or neglect). Criterion variables included a broad array of young adult measures of functioning gleaned from multiple-source, multiple-informant instruments. With stringent statistical control of demographic, prenatal, and family status characteristics as well as baseline levels of the criterion variable in question, maltreated participants were significantly more impaired than nonmaltreated participants with respect to self-harm (suicide attempts), internalizing symptomatology (anxiety and depression), eating disorder symptomatology, and well-being (lower overall self-worth). Effect sizes were medium. Comprising the first longitudinal evidence linking maltreatment with key young adult life impairments among a carefully diagnosed and followed sample of females with attention-deficit/hyperactivity disorder, these findings underscore the clinical importance of trauma experiences within this population.
Braiding by Majorana tracking and long-range CNOT gates with color codes
NASA Astrophysics Data System (ADS)
Litinski, Daniel; von Oppen, Felix
2017-11-01
Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities within hyperspectral imaging allow wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interferences. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
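As a minimal, hedged illustration of low-level graph-based fusion (not the thesis' algorithms), the sketch below combines per-source affinity graphs built from spectral and LiDAR feature vectors of the same pixels into one fused graph and uses the Laplacian eigenvectors as a joint embedding; the feature arrays are random placeholders.

```python
# Hedged sketch: fuse per-source affinity graphs and embed with the Laplacian.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def affinity(X, sigma=1.0):
    """Gaussian affinity matrix from pairwise squared distances."""
    D = cdist(X, X, "sqeuclidean")
    return np.exp(-D / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n = 200
spectral_feats = rng.normal(size=(n, 30))   # e.g. HSI bands per pixel (placeholder)
lidar_feats = rng.normal(size=(n, 4))       # e.g. height/intensity features (placeholder)

W = 0.5 * affinity(spectral_feats, 5.0) + 0.5 * affinity(lidar_feats, 1.5)  # fused graph
L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
vals, vecs = eigh(L)
embedding = vecs[:, 1:4]                    # low-dimensional joint representation
print(embedding.shape)                      # feed this to a clustering/classification step
```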
Personality Disorder Symptoms Are Differentially Related to Divorce Frequency
Disney, Krystle L.; Weinstein, Yana; Oltmanns, Thomas F.
2013-01-01
Divorce is associated with a multitude of outcomes related to health and well-being. Data from a representative community sample (N = 1,241) of St. Louis residents (ages 55–64) were used to examine associations between personality pathology and divorce in late midlife. Symptoms of the 10 DSM–IV personality disorders were assessed with the Structured Interview for DSM–IV Personality and the Multisource Assessment of Personality Pathology (both self and informant versions). Multiple regression analyses showed Paranoid and Histrionic personality disorder symptoms to be consistently and positively associated with number of divorces across all three sources of personality assessment. Conversely, Avoidant personality disorder symptoms were negatively associated with number of divorces. The present paper provides new information about the relationship between divorce and personality pathology at a developmental stage that is understudied in both domains. PMID:23244459
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.
2017-01-01
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796
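For reference, the OSPA metric used in the evaluations above can be computed with a linear assignment step; the sketch below (order p, cutoff c) is generic and not tied to any particular tracker, and the example positions are synthetic.

```python
# Generic OSPA distance (order p, cutoff c) via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=2):
    """X, Y: arrays of shape (m, d) and (n, d) of estimated/true target positions."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                  # keep X the smaller set
        X, Y, m, n = Y, X, n, m
    D = np.minimum(cdist(X, Y), c) ** p        # cutoff distances
    rows, cols = linear_sum_assignment(D)
    cost = D[rows, cols].sum() + (c ** p) * (n - m)   # localization + cardinality terms
    return (cost / n) ** (1.0 / p)

est = np.array([[0.0, 0.0], [5.0, 5.0]])
truth = np.array([[0.5, 0.0], [5.0, 4.0], [20.0, 20.0]])
print(round(ospa(est, truth), 3))              # penalizes the missed third target
```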
Mid-course multi-target tracking using continuous representation
NASA Technical Reports Server (NTRS)
Zak, Michail; Toomarian, Nikzad
1991-01-01
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters between the flying objects in a high-density cluster. Therefore, the problem is reduced to the identification of a moving continuum based upon consecutive time frame observations. In contradistinction to previous approaches, here each target is considered as the center of a small continuous neighborhood subjected to a local affine transformation, and therefore the target trajectories do not mix; their mixing in the plane of sensor view is only apparent. The approach is illustrated by an example.
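The local-affine idea can be illustrated with a least-squares fit: given the positions of a small neighborhood of targets in two consecutive frames, the affine map is recovered directly. The sketch below is illustrative only, with synthetic positions.

```python
# Illustrative only: least-squares estimate of the local affine map A, t that
# a small neighborhood of targets undergoes between two time frames.
import numpy as np

def fit_affine(P, Q):
    """Find A (2x2) and t (2,) minimizing ||A P_i + t - Q_i||^2 over the neighborhood."""
    ones = np.ones((len(P), 1))
    M = np.hstack([P, ones])                      # (n, 3) design matrix
    X, *_ = np.linalg.lstsq(M, Q, rcond=None)     # (3, 2): rows are A^T and t
    return X[:2].T, X[2]

rng = np.random.default_rng(0)
P = rng.normal(size=(12, 2))                      # neighborhood positions at frame k
A_true, t_true = np.array([[1.02, 0.05], [-0.03, 0.98]]), np.array([0.4, -0.2])
Q = P @ A_true.T + t_true + 0.001 * rng.normal(size=P.shape)   # frame k+1
A_est, t_est = fit_affine(P, Q)
print(np.round(A_est, 3), np.round(t_est, 3))     # recovers the local affine transformation
```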
Exploring Multitarget Interactions to Reduce Opiate Withdrawal Syndrome and Psychiatric Comorbidity
2013-01-01
Opioid addiction is often characterized as a chronic relapsing condition due to the severe somatic and behavioral signs, associated with depressive disorders, triggered by opiate withdrawal. Since prolonged abstinence remains a major challenge, our interest has been directed toward this objective. Exploring multitarget interactions, the present investigation suggests that 3 or its (S)-enantiomer and 4, endowed with effective α2C-AR agonism/α2A-AR antagonism/5-HT1A-R agonism, or 7 and 9–11, producing efficacious α2C-AR agonism/α2A-AR antagonism/I2–IBS interaction, might represent novel multifunctional tools potentially useful for reducing withdrawal syndrome and associated depression. Such agents, lacking sedative side effects due to their α2A-AR antagonism, might afford an improvement over current therapies with clonidine-like drugs. PMID:24900763
CADD Modeling of Multi-Target Drugs Against Alzheimer's Disease.
Ambure, Pravin; Roy, Kunal
2017-01-01
Alzheimer's disease (AD) is a neurodegenerative disorder that is described by multiple factors linked with the progression of the disease. The currently approved drugs in the market are not capable of curing AD; instead, they merely provide symptomatic relief. Development of multi-target directed ligands (MTDLs) is an emerging strategy for improving the quality of the treatment against complex diseases like AD. Polypharmacology is a branch of pharmaceutical sciences that deals with the MTDL development. In this mini-review, we have summarized and discussed different strategies that are reported in the literature to design MTDLs for AD. Further, we have discussed the role of different in silico techniques and online resources in computer-aided drug discovery (CADD), for designing or identifying MTDLs against AD. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Kretova, Olga V; Chechetkin, Vladimir R; Fedoseeva, Daria M; Kravatsky, Yuri V; Sosin, Dmitri V; Alembekov, Ildar R; Gorbacheva, Maria A; Gashnikova, Natalya M; Tchurikov, Nickolai A
2017-02-01
Any method for silencing the activity of the HIV-1 retrovirus should tackle the extremely high variability of HIV-1 sequences and mutational escape. We studied sequence variability in the vicinity of selected RNA interference (RNAi) targets from isolates of HIV-1 subtype A in Russia, and we propose that using artificial RNAi is a potential alternative to traditional antiretroviral therapy. We show that using multiple RNAi targets overcomes the variability of HIV-1 isolates. The optimal number of targets critically depends on the conservation of the target sequences: the total number of targets that are conserved with a probability of 0.7-0.8 should exceed 2. Combining deep sequencing and multitarget RNAi may provide an efficient approach to curing HIV/AIDS.
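A back-of-the-envelope calculation illustrates the target-number argument: treating per-target conservation in a given isolate as independent with probability p = 0.7-0.8 (an assumption made here purely for illustration), the chance that at least two targets remain usable grows quickly with the number of targets.

```python
# Illustrative binomial calculation (independence assumed for simplicity):
# probability that at least k of n RNAi targets are conserved in an isolate.
from math import comb

def prob_at_least(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for n in (2, 3, 4, 5):
    print(n, round(prob_at_least(n, k=2, p=0.7), 3), round(prob_at_least(n, k=2, p=0.8), 3))
```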
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important for the accurate analysis of genetically modified organism (GMO) content in food and feed, and the development of novel reference plasmids is a new trend in GMO reference material research. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contained seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and the sequence of the soybean endogenous reference gene Lectin. We evaluated the specificity, limits of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R²) were greater than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations ranged from 1.08% to 9.84% in simulated sample analyses. All the results demonstrated that the developed multi-targeting plasmid, pSOY, was a credible substitute for matrix-based reference materials and could be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events.
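The per-assay performance figures reported above are typically derived from a standard-curve regression; the sketch below computes PCR efficiency and R² from a synthetic dilution series of a plasmid calibrator using the usual E = 10^(-1/slope) - 1 relation (the Cq values are invented for illustration).

```python
# Standard-curve sketch: PCR efficiency and R^2 from Cq vs. log10(copy number).
import numpy as np

copies = np.array([1e5, 1e4, 1e3, 1e2, 10])
cq = np.array([17.1, 20.5, 23.9, 27.3, 30.8])          # toy dilution series

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
pred = slope * np.log10(copies) + intercept
r2 = 1 - np.sum((cq - pred) ** 2) / np.sum((cq - cq.mean()) ** 2)
efficiency = 10 ** (-1.0 / slope) - 1.0                # E = 10^(-1/slope) - 1

print(f"slope={slope:.3f}, efficiency={efficiency*100:.1f}%, R^2={r2:.4f}")
```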
Predicting targets of compounds against neurological diseases using cheminformatic methodology
NASA Astrophysics Data System (ADS)
Nikolic, Katarina; Mavridis, Lazaros; Bautista-Aguilera, Oscar M.; Marco-Contelles, José; Stark, Holger; do Carmo Carreiras, Maria; Rossi, Ilaria; Massarelli, Paola; Agbaba, Danica; Ramsay, Rona R.; Mitchell, John B. O.
2015-02-01
Recently developed multi-targeted ligands are novel drug candidates able to interact with monoamine oxidase A and B; acetylcholinesterase and butyrylcholinesterase; or with histamine N-methyltransferase and the histamine H3-receptor (H3R). These proteins are drug targets in the treatment of depression, Alzheimer's disease, obsessive disorders, and Parkinson's disease. A probabilistic method, the Parzen-Rosenblatt window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Molecular structures were represented based on the circular fingerprint methodology. The same approach was used to build a "predictor" model from the DrugBank dataset to determine the main pharmacological groups of the compound. The study of off-target interactions is now recognised as crucial to the understanding of both drug action and toxicology. Primary pharmaceutical targets and off-targets for the novel multi-target ligands were examined by use of the developed cheminformatic method. Several multi-target ligands were selected for further study as compounds with possible additional beneficial pharmacological activities. The cheminformatic target identifications were in agreement with four 3D-QSAR (H3R/D1R/D2R/5-HT2aR) models and with in vitro assays of serotonin 5-HT1a and 5-HT2a receptor binding for the most promising ligand (71/MBA-VEG8).
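As a hedged sketch of a Parzen-Rosenblatt window "predictor" (not the published model), the snippet below scores a query compound against each target class by kernel density estimation over fingerprint vectors; the random binary vectors stand in for circular (Morgan-type) fingerprints, and the target names are placeholders.

```python
# Hedged sketch: Parzen-Rosenblatt window scoring of a query fingerprint
# against ligand sets of several targets (primary target = highest density).
import numpy as np

def parzen_score(query, X_class, h=2.5):
    """Mean Gaussian-kernel similarity of the query to a target's known ligands."""
    d2 = np.sum((X_class - query) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h ** 2)))

rng = np.random.default_rng(1)
targets = {"MAO-B": rng.integers(0, 2, size=(40, 128)),    # stand-ins for circular fingerprints
           "AChE": rng.integers(0, 2, size=(55, 128)),
           "H3R": rng.integers(0, 2, size=(30, 128))}
query_fp = rng.integers(0, 2, size=128)

scores = {t: parzen_score(query_fp, X) for t, X in targets.items()}
print(max(scores, key=scores.get), scores)                 # predicted primary target + off-target scores
```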
Capurro, Valeria; Busquet, Perrine; Lopes, Joao Pedro; Bertorelli, Rosalia; Tarozzo, Glauco; Bolognesi, Maria Laura; Piomelli, Daniele; Reggiani, Angelo; Cavalli, Andrea
2013-01-01
Alzheimer's disease (AD) is characterized by progressive loss of cognitive function, dementia and altered behavior. Over 30 million people worldwide suffer from AD and available therapies are still palliative rather than curative. Recently, Memoquin (MQ), a quinone-bearing polyamine compound, has emerged as a promising anti-AD lead candidate, mainly thanks to its multi-target profile. MQ acts as an acetylcholinesterase and β-secretase-1 inhibitor, and also possesses anti-amyloid and anti-oxidant properties. Despite this potential interest, in vivo behavioral studies with MQ have been limited. Here, we report on in vivo studies with MQ (acute and sub-chronic treatments; 7-15 mg/kg per os) carried out using two different mouse models: i) scopolamine- and ii) beta-amyloid peptide- (Aβ-) induced amnesia. Several aspects related to memory were examined using the T-maze, the Morris water maze, the novel object recognition, and the passive avoidance tasks. At the dose of 15 mg/kg, MQ was able to rescue all tested aspects of cognitive impairment including spatial, episodic, aversive, short and long-term memory in both scopolamine- and Aβ-induced amnesia models. Furthermore, when tested in primary cortical neurons, MQ was able to fully prevent the Aβ-induced neurotoxicity mediated by oxidative stress. The results support the effectiveness of MQ as a cognitive enhancer, and highlight the value of a multi-target strategy to address the complex nature of cognitive dysfunction in AD.
Capurro, Valeria; Busquet, Perrine; Lopes, Joao Pedro; Bertorelli, Rosalia; Tarozzo, Glauco; Bolognesi, Maria Laura; Piomelli, Daniele; Reggiani, Angelo; Cavalli, Andrea
2013-01-01
Alzheimer's disease (AD) is characterized by progressive loss of cognitive function, dementia and altered behavior. Over 30 million people worldwide suffer from AD and available therapies are still palliative rather than curative. Recently, Memoquin (MQ), a quinone-bearing polyamine compound, has emerged as a promising anti-AD lead candidate, mainly thanks to its multi-target profile. MQ acts as an acetylcholinesterase and β-secretase-1 inhibitor, and also possesses anti-amyloid and anti-oxidant properties. Despite this potential interest, in vivo behavioral studies with MQ have been limited. Here, we report on in vivo studies with MQ (acute and sub-chronic treatments; 7–15 mg/kg per os) carried out using two different mouse models: i) scopolamine- and ii) beta-amyloid peptide- (Aβ-) induced amnesia. Several aspects related to memory were examined using the T-maze, the Morris water maze, the novel object recognition, and the passive avoidance tasks. At the dose of 15 mg/kg, MQ was able to rescue all tested aspects of cognitive impairment including spatial, episodic, aversive, short and long-term memory in both scopolamine- and Aβ-induced amnesia models. Furthermore, when tested in primary cortical neurons, MQ was able to fully prevent the Aβ-induced neurotoxicity mediated by oxidative stress. The results support the effectiveness of MQ as a cognitive enhancer, and highlight the value of a multi-target strategy to address the complex nature of cognitive dysfunction in AD. PMID:23441223
Multitarget transcranial direct current stimulation for freezing of gait in Parkinson's disease.
Dagan, Moria; Herman, Talia; Harrison, Rachel; Zhou, Junhong; Giladi, Nir; Ruffini, Giulio; Manor, Brad; Hausdorff, Jeffrey M
2018-04-01
Recent findings suggest that transcranial direct current stimulation of the primary motor cortex may ameliorate freezing of gait. However, the effects of multitarget simultaneous stimulation of motor and cognitive networks are mostly unknown. The objective of this study was to evaluate the effects of multitarget transcranial direct current stimulation of the primary motor cortex and left dorsolateral prefrontal cortex on freezing of gait and related outcomes. Twenty patients with Parkinson's disease and freezing of gait received 20 minutes of transcranial direct current stimulation on 3 separate visits. Transcranial direct current stimulation targeted the primary motor cortex and left dorsolateral prefrontal cortex simultaneously, primary motor cortex only, or sham stimulation (order randomized and double-blinded assessments). Participants completed a freezing of gait-provoking test, the Timed Up and Go, and the Stroop test before and after each transcranial direct current stimulation session. Performance on the freezing of gait-provoking test (P = 0.010), Timed Up and Go (P = 0.006), and the Stroop test (P = 0.016) improved after simultaneous stimulation of the primary motor cortex and left dorsolateral prefrontal cortex, but not after primary motor cortex only or sham stimulation. Transcranial direct current stimulation designed to simultaneously target motor and cognitive regions apparently induces immediate aftereffects in the brain that translate into reduced freezing of gait and improvements in executive function and mobility. © 2018 International Parkinson and Movement Disorder Society.
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standardized data storage. The overall purpose of this work is to set up a common data platform that provides an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format for designing data file structures in a relational database. (2) A multi-level data dictionary for enforcing data standardization. Three levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; and the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods. In addition, a comprehensive data dictionary manages system operation and security. (3) An extension of the system data management functions based on the data dictionary. The data-item constraint input function uses the standard term and code dictionary to obtain standardized input; the attribute dictionary organizes all fields of an urban geological information database to ensure consistent term use for fields; and the model dictionary is used to generate a database operation interface automatically, with standard semantic content supplied via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System in South-East China with satisfactory results.
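To make the term-and-code dictionary idea concrete, here is a minimal Python sketch of dictionary-constrained input: free-form field values are mapped to a standard term and code, and non-standard values are rejected. The field name, terms, and codes are invented placeholders, not entries from the national standards cited above.

```python
# Hypothetical term-and-code dictionary: free-form inputs mapped to a
# standard term and its code (all values here are placeholders).
TERM_CODE_DICT = {
    "lithology": {
        "silty clay": ("silty clay", "SC-01"),
        "silt clay":  ("silty clay", "SC-01"),   # duplicate/ambiguous term
        "fine sand":  ("fine sand",  "FS-02"),
    },
}

def standardize(field, raw_value):
    """Constrain a data-item input to the standard term and code,
    raising an error when the value is not in the dictionary."""
    try:
        return TERM_CODE_DICT[field][raw_value.strip().lower()]
    except KeyError:
        raise ValueError(f"'{raw_value}' is not a standard term for field '{field}'")

print(standardize("lithology", "Silt Clay"))   # -> ('silty clay', 'SC-01')
```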
Test-retest reliability of an fMRI paradigm for studies of cardiovascular reactivity.
Sheu, Lei K; Jennings, J Richard; Gianaros, Peter J
2012-07-01
We examined the reliability of measures of fMRI, subjective, and cardiovascular reactions to standardized versions of a Stroop color-word task and a multisource interference task. A sample of 14 men and 12 women (30-49 years old) completed the tasks on two occasions, separated by a median of 88 days. The reliability of fMRI BOLD signal changes in brain areas engaged by the tasks was moderate, and aggregating fMRI BOLD signal changes across the tasks improved test-retest reliability metrics. These metrics included voxel-wise intraclass correlation coefficients (ICCs) and overlap ratio statistics. Task-aggregated ratings of subjective arousal, valence, and control, as well as cardiovascular reactions evoked by the tasks showed ICCs of 0.57 to 0.87 (ps < .001), indicating moderate-to-strong reliability. These findings support using these tasks as a battery for fMRI studies of cardiovascular reactivity. Copyright © 2012 Society for Psychophysiological Research.
NASA Technical Reports Server (NTRS)
Brooks, Colin; Bourgeau-Chavez, Laura; Endres, Sarah; Battaglia, Michael; Shuchman, Robert
2015-01-01
Assist with the evaluation and measurement of wetland hydroperiod at the Plum Brook Station using multi-source remote sensing data, as part of a larger effort on projecting climate change-related impacts on the station's wetland ecosystems. MTRI expanded its multi-source remote sensing capabilities to help estimate and measure the hydroperiod and relative soil moisture of wetlands at NASA's Plum Brook Station. Multi-source remote sensing capabilities are useful in estimating and measuring the hydroperiod and relative soil moisture of wetlands; this is important because a changing regional climate poses several potential risks to wetland ecosystem function. The year-two analysis built on the first year of the project by acquiring and analyzing remote sensing data for additional dates and types of imagery, combined with focused field work. Five deliverables were planned and completed: (1) show the relative length of hydroperiod using available remote sensing datasets; (2) a date-linked table of wetland extent over time for all feasible non-forested wetlands; (3) use of LIDAR data to measure the topographic height above sea level of all wetlands, the wetland-to-catchment-area ratio, wetland slope, and other useful variables; (4) a demonstration of how analyzed results from multiple remote sensing data sources can help with wetland vulnerability assessment; and (5) an MTRI-style report summarizing year-2 results.
Multi-source energy harvester to power sensing hardware on rotating structures
NASA Astrophysics Data System (ADS)
Schlichting, Alexander; Ouellette, Scott; Carlson, Clinton; Farinholt, Kevin M.; Park, Gyuhae; Farrar, Charles R.
2010-04-01
The U.S. Department of Energy (DOE) proposes to meet 20% of the nation's energy needs through wind power by the year 2030. To accomplish this goal, the industry will need to produce larger (>100m diameter) turbines to increase efficiency and maximize energy production. It will be imperative to instrument the large composite structures with onboard sensing to provide structural health monitoring capabilities to understand the global response and integrity of these systems as they age. A critical component in the deployment of such a system will be a robust power source that can operate for the lifespan of the wind turbine. In this paper we consider the use of discrete, localized power sources that derive energy from the ambient (solar, thermal) or operational (kinetic) environment. This approach will rely on a multi-source configuration that scavenges energy from photovoltaic and piezoelectric transducers. Each harvester is first characterized individually in the laboratory; the harvesters are then combined through a multi-source power conditioner designed to combine their outputs in series to power a small wireless sensor node that has active-sensing capabilities. The advantages and disadvantages of each approach are discussed, along with the proposed design for a field-ready energy harvester that will be deployed on a small-scale 19.8 m diameter wind turbine.
Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.
Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin
2018-02-09
Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy losses of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Via spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be captured. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, which is helpful for reconstructing hydrological sequences; the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, and the differencing technique applied to high-resolution optical and microwave images acquired before and after floods can then be used to calculate the flood extent and reflect spatial changes in inundation; the flood-depth monitoring algorithm combining RS and GIS is simple and can quickly calculate the depth from a known flood extent obtained from remote sensing images in ungauged basins. These results can provide effective support for the disaster relief work performed by government departments.
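The abstract does not spell out the depth formula; a common RS/GIS approach consistent with the description is to take the water surface as the mean DEM elevation along the flood boundary and subtract the ground DEM inside the known extent. The sketch below, on a toy grid, assumes exactly that.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def flood_depth(dem, flood_mask):
    """Estimate flood depth inside a known flood extent.
    Water level ~ mean DEM elevation along the flood boundary;
    depth = water level - ground elevation (clipped at zero)."""
    boundary = flood_mask & ~binary_erosion(flood_mask)
    water_level = dem[boundary].mean()
    return np.where(flood_mask, np.clip(water_level - dem, 0.0, None), 0.0)

# Toy 5x5 DEM (m) with a flooded low-lying patch.
dem = np.array([[12, 12, 12, 12, 12],
                [12, 10,  9, 10, 12],
                [12,  9,  8,  9, 12],
                [12, 10,  9, 10, 12],
                [12, 12, 12, 12, 12]], dtype=float)
mask = dem < 11
print(flood_depth(dem, mask).round(1))
```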
Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.
2013-01-01
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year 1. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans 2. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357
Adoptive T cell cancer therapy
NASA Astrophysics Data System (ADS)
Dzhandzhugazyan, Karine N.; Guldberg, Per; Kirkin, Alexei F.
2018-06-01
Tumour heterogeneity and off-target toxicity are current challenges of cancer immunotherapy. Karine Dzhandzhugazyan, Per Guldberg and Alexei Kirkin discuss how epigenetic induction of tumour antigens in antigen-presenting cells may form the basis for multi-target therapies.
Wang, Qi; Xie, Zhiyi; Li, Fangbai
2015-11-01
This study aims to identify and apportion multi-source and multi-phase heavy metal pollution from natural and anthropogenic inputs using ensemble models that include stochastic gradient boosting (SGB) and random forest (RF) in agricultural soils on the local scale. The heavy metal pollution sources were quantitatively assessed, and the results illustrated the suitability of the ensemble models for the assessment of multi-source and multi-phase heavy metal pollution in agricultural soils on the local scale. The results of SGB and RF consistently demonstrated that anthropogenic sources contributed the most to the concentrations of Pb and Cd in agricultural soils in the study region and that SGB performed better than RF. Copyright © 2015 Elsevier Ltd. All rights reserved.
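A minimal sketch of the two ensemble learners named above, using scikit-learn (stochastic gradient boosting approximated by GradientBoostingRegressor with subsampling, plus RandomForestRegressor) on synthetic data; the predictors, response, and values are placeholders rather than the study's soil measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(42)
n = 300
# Hypothetical predictors: e.g. distance to road, soil pH, parent-material index.
X = rng.normal(size=(n, 3))
# Synthetic Pb concentration driven mostly by the "anthropogenic" predictor X[:, 0].
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

sgb = GradientBoostingRegressor(subsample=0.5, random_state=0).fit(X, y)  # stochastic GB
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

for name, model in [("SGB", sgb), ("RF", rf)]:
    # Variable importances indicate which source-related predictors dominate.
    print(name, np.round(model.feature_importances_, 3))
```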
Gomez, Rapson; Burns, G Leonard; Walsh, James A; Hafetz, Nina
2005-04-01
Confirmatory factor analysis (CFA) was used to model a multitrait by multisource matrix to determine the convergent and discriminant validity of measures of attention-deficit hyperactivity disorder (ADHD)-inattention (IN), ADHD-hyperactivity/impulsivity (HI), and oppositional defiant disorder (ODD) in 917 Malaysian elementary school children. The three trait factors were ADHD-IN, ADHDHI, and ODD. The two source factors were parents and teachers. Similar to earlier studies with Australian and Brazilian children, the parent and teacher measures failed to show convergent and discriminant validity with Malaysian children. The study outlines the implications of such strong source effects in ADHD-IN, ADHD-HI, and ODD measures for the use of such parent and teacher scales to study the symptom dimensions.
Connected Vehicle Applications : Mobility
DOT National Transportation Integrated Search
2017-03-03
Connected vehicle mobility applications are commonly referred to as dynamic mobility applications (DMAs). DMAs seek to fully leverage frequently collected and rapidly disseminated multi-source data gathered from connected travelers, vehicles, and inf...
Antibacterial Drug Leads: DNA and Enzyme Multitargeting
Zhu, Wei; Wang, Yang; Li, Kai; ...
2015-01-09
Here, we report the results of an investigation of the activity of a series of amidine and bisamidine compounds against Staphylococcus aureus and Escherichia coli. The most active compounds bound to an AT-rich DNA dodecamer (CGCGAATTCGCG)2 and, using DSC, were found to increase the melting transition by up to 24 °C. Several compounds also inhibited undecaprenyl diphosphate synthase (UPPS) with IC50 values of 100–500 nM, and we found good correlations (R2 = 0.89, S. aureus; R2 = 0.79, E. coli) between experimental and predicted cell growth inhibition by using DNA ΔTm and UPPS IC50 experimental results together with one computed descriptor. Finally, we also solved the structures of three bisamidines binding to DNA as well as three UPPS structures. Overall, the results are of general interest in the context of the development of resistance-resistant antibiotics that involve multitargeting.
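The correlation reported above amounts to an ordinary multiple regression of cell growth inhibition on DNA ΔTm, a UPPS potency term, and one computed descriptor, summarized by R2. The sketch below reproduces that kind of workflow on synthetic numbers; the coefficients and data are invented, not the paper's measurements.

```python
import numpy as np

# Synthetic descriptor matrix: [DNA dTm (degC), pIC50 vs UPPS, computed descriptor]
rng = np.random.default_rng(1)
n = 12
X = np.column_stack([rng.uniform(2, 24, n), rng.uniform(6, 7, n), rng.normal(size=n)])
y = 0.8 * X[:, 0] + 5.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(scale=1.0, size=n)

A = np.column_stack([X, np.ones(n)])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares fit
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```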
Multitarget detection algorithm for automotive FMCW radar
NASA Astrophysics Data System (ADS)
Hyun, Eugin; Oh, Woo-Jin; Lee, Jong-Hun
2012-06-01
Today, 77 GHz FMCW (Frequency Modulation Continuous Wave) radar has strong advantages of range and velocity detection for automotive applications. However, FMCW radar brings out ghost targets and missed targets in multi-target situations. In this paper, in order to resolve these limitations, we propose an effective pairing algorithm, which consists of two steps. In the proposed method, a waveform with different slopes in two periods is used. In the 1st pairing processing, all combinations of range and velocity are obtained in each of two wave periods. In the 2nd pairing step, using the results of the 1st pairing processing, fine range and velocity are detected. In that case, we propose the range-velocity windowing technique in order to compensate for the non-ideal beat-frequency characteristic that arises due to the non-linearity of the RF module. Based on experimental results, the performance of the proposed algorithm is improved compared with that of the typical method.
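The pairing problem arises because a single sweep slope yields one beat frequency per target that mixes range and Doppler. For the classic triangular (up/down) special case of the two-slope idea, the two beat frequencies can be solved directly for range and radial velocity, as in the sketch below; the carrier frequency and sweep slope are illustrative values, not the paper's radar settings.

```python
C = 3e8            # speed of light (m/s)
FC = 77e9          # carrier frequency (Hz)
S = 150e6 / 1e-3   # sweep slope: 150 MHz swept in 1 ms (Hz/s) -- illustrative

def range_velocity(f_up, f_down):
    """Solve the triangular-FMCW beat-frequency pair for range and radial velocity.
    f_up   = 2*S*R/C - 2*v*FC/C   (up-chirp beat, approaching target)
    f_down = 2*S*R/C + 2*v*FC/C   (down-chirp beat)"""
    rng_m = C * (f_up + f_down) / (4.0 * S)
    vel_mps = C * (f_down - f_up) / (4.0 * FC)
    return rng_m, vel_mps

# Forward check with a target at 50 m closing at 20 m/s.
R, v = 50.0, 20.0
f_up = 2 * S * R / C - 2 * v * FC / C
f_down = 2 * S * R / C + 2 * v * FC / C
print(range_velocity(f_up, f_down))   # ~(50.0, 20.0)
```

With multiple targets the beat frequencies from the two periods can be combined in several ways, which is why the paper adds a second pairing step with range-velocity windowing to reject the ghost combinations.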
Discovery of multi-target receptor tyrosine kinase inhibitors as novel anti-angiogenesis agents
NASA Astrophysics Data System (ADS)
Wang, Jinfeng; Zhang, Lin; Pan, Xiaoyan; Dai, Bingling; Sun, Ying; Li, Chuansheng; Zhang, Jie
2017-03-01
Recently, we have identified a biphenyl-aryl urea incorporated with salicylaldoxime (BPS-7) as an anti-angiogenesis agent. Herein, we disclosed a series of novel anti-angiogenesis agents with BPS-7 as lead compound through combining diarylureas with N-pyridin-2-ylcyclopropane carboxamide. Several title compounds exhibited simultaneous inhibition effects against three pro-angiogenic RTKs (VEGFR-2, TIE-2 and EphB4). Some of them displayed potent anti-proliferative activity against human vascular endothelial cell (EA.hy926). In particular, two potent compounds (CDAU-1 and CDAU-2) could be considered as promising anti-angiogenesis agents with triplet inhibition profile. The biological evaluation and molecular docking results indicate that N-pyridin-2-ylcyclopropane carboxamide could serve as a hinge-binding group (HBG) for the discovery of multi-target anti-angiogenesis agents. CDAU-2 also exhibited promising anti-angiogenic potency in a tissue model for angiogenesis.
Tassini, Sabrina; Sun, Liang; Lanko, Kristina; Crespan, Emmanuele; Langron, Emily; Falchi, Federico; Kissova, Miroslava; Armijos-Rivera, Jorge I; Delang, Leen; Mirabelli, Carmen; Neyts, Johan; Pieroni, Marco; Cavalli, Andrea; Costantino, Gabriele; Maga, Giovanni; Vergani, Paola; Leyssen, Pieter; Radi, Marco
2017-02-23
Enteroviruses (EVs) are among the most frequent infectious agents in humans worldwide and represent the leading cause of upper respiratory tract infections. No drugs for the treatment of EV infections are currently available. Recent studies have also linked EV infection with pulmonary exacerbations, especially in cystic fibrosis (CF) patients, and the importance of this link is probably underestimated. The aim of this work was to develop a new class of multitarget agents active both as broad-spectrum antivirals and as correctors of the F508del-cystic fibrosis transmembrane conductance regulator (CFTR) folding defect responsible for >90% of CF cases. We report herein the discovery of the first small molecules able to simultaneously act as correctors of the F508del-CFTR folding defect and as broad-spectrum antivirals against a panel of EVs representative of all major species.
Marco-Contelles, José; León, Rafael; de los Ríos, Cristóbal; Samadi, Abdelouahid; Bartolini, Manuela; Andrisano, Vincenza; Huertas, Oscar; Barril, Xavier; Luque, F Javier; Rodríguez-Franco, María I; López, Beatriz; López, Manuela G; García, Antonio G; Carreiras, María do Carmo; Villarroya, Mercedes
2009-05-14
Tacripyrines (1-14) have been designed by combining an AChE inhibitor (tacrine) with a calcium antagonist such as nimodipine and are targeted to develop a multitarget therapeutic strategy to confront AD. Tacripyrines are selective and potent AChE inhibitors in the nanomolar range. The mixed type inhibition of hAChE activity of compound 11 (IC(50) 105 +/- 15 nM) is associated to a 30.7 +/- 8.6% inhibition of the proaggregating action of AChE on the Abeta and a moderate inhibition of Abeta self-aggregation (34.9 +/- 5.4%). Molecular modeling indicates that binding of compound 11 to the AChE PAS mainly involves the (R)-11 enantiomer, which also agrees with the noncompetitive inhibition mechanism exhibited by p-methoxytacripyrine 11. Tacripyrines are neuroprotective agents, show moderate Ca(2+) channel blocking effect, and cross the blood-brain barrier, emerging as lead candidates for treating AD.
The role of fragment-based and computational methods in polypharmacology.
Bottegoni, Giovanni; Favia, Angelo D; Recanatini, Maurizio; Cavalli, Andrea
2012-01-01
Polypharmacology-based strategies are gaining increased attention as a novel approach to obtaining potentially innovative medicines for multifactorial diseases. However, some within the pharmaceutical community have resisted these strategies because they can be resource-hungry in the early stages of the drug discovery process. Here, we report on fragment-based and computational methods that might accelerate and optimize the discovery of multitarget drugs. In particular, we illustrate that fragment-based approaches can be particularly suited for polypharmacology, owing to the inherent promiscuous nature of fragments. In parallel, we explain how computer-assisted protocols can provide invaluable insights into how to unveil compounds theoretically able to bind to more than one protein. Furthermore, several pragmatic aspects related to the use of these approaches are covered, thus offering the reader practical insights on multitarget-oriented drug discovery projects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Gejjalagere Honnappa, Chethan; Mazhuvancherry Kesavan, Unnikrishnan
2016-12-01
Inflammatory diseases are complex, multi-factorial outcomes of evolutionarily conserved tissue repair processes. For decades, non-steroidal anti-inflammatory drugs and cyclooxygenase inhibitors, the primary drugs of choice for the management of inflammatory diseases, addressed individual targets in the arachidonic acid pathway. Unsatisfactory safety and efficacy profiles of the above have necessitated the development of multi-target agents to treat complex inflammatory diseases. Current anti-inflammatory therapies still fall short of clinical needs and the clinical trial results of multi-target therapeutics are anticipated. Additionally, new drug targets are emerging with improved understanding of molecular mechanisms controlling the pathophysiology of inflammation. This review presents an outline of small molecules and drug targets in anti-inflammatory therapeutics with a summary of a newly identified target AMP-activated protein kinase, which constitutes a novel therapeutic pathway in inflammatory pathology. © The Author(s) 2016.
Multisource inverse-geometry CT. Part I. System concept and development
De Man, Bruno; Uribe, Jorge; Baek, Jongduk; Harrison, Dan; Yin, Zhye; Longtin, Randy; Roy, Jaydeep; Waters, Bill; Wilson, Colin; Short, Jonathan; Inzinna, Lou; Reynolds, Joseph; Neculaes, V. Bogdan; Frutschy, Kristopher; Senzig, Bob; Pelc, Norbert
2016-01-01
Purpose: This paper presents an overview of multisource inverse-geometry computed tomography (IGCT) as well as the development of a gantry-based research prototype system. The development of the distributed x-ray source is covered in a companion paper [V. B. Neculaes et al., “Multisource inverse-geometry CT. Part II. X-ray source design and prototype,” Med. Phys. 43, 4617–4627 (2016)]. While progress updates of this development have been presented at conferences and in journal papers, this paper is the first comprehensive overview of the multisource inverse-geometry CT concept and prototype. The authors also provide a review of all previous IGCT related publications. Methods: The authors designed and implemented a gantry-based 32-source IGCT scanner with 22 cm field-of-view, 16 cm z-coverage, 1 s rotation time, 1.09 × 1.024 mm detector cell size, as low as 0.4 × 0.8 mm focal spot size and 80–140 kVp x-ray source voltage. The system is built using commercially available CT components and a custom made distributed x-ray source. The authors developed dedicated controls, calibrations, and reconstruction algorithms and evaluated the system performance using phantoms and small animals. Results: The authors performed IGCT system experiments and demonstrated tube current up to 125 mA with up to 32 focal spots. The authors measured a spatial resolution of 13 lp/cm at 5% cutoff. The scatter-to-primary ratio is estimated 62% for a 32 cm water phantom at 140 kVp. The authors scanned several phantoms and small animals. The initial images have relatively high noise due to the low x-ray flux levels but minimal artifacts. Conclusions: IGCT has unique benefits in terms of dose-efficiency and cone-beam artifacts, but comes with challenges in terms of scattered radiation and x-ray flux limits. To the authors’ knowledge, their prototype is the first gantry-based IGCT scanner. The authors summarized the design and implementation of the scanner and the authors presented results with phantoms and small animals. PMID:27487877
Multisource data fusion for documenting archaeological sites
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir; Chibunichev, Alexander; Zhuravlev, Denis
2017-10-01
The quality of archaeological site documentation is of great importance for preserving and investigating cultural heritage. Progress in developing new techniques and systems for data acquisition and processing creates an excellent basis for achieving a new quality of archaeological site documentation and visualization. Archaeological data have some specific features which have to be taken into account during acquisition, processing and management. First of all, it is necessary to gather information about findings that is as complete as possible, with no loss of information and no damage to artifacts. Remote sensing technologies are the most adequate and powerful means of satisfying this requirement. An approach to archaeological data acquisition and fusion based on remote sensing is proposed. It combines a set of photogrammetric techniques for obtaining geometrical and visual information at different scales and levels of detail with a pipeline for archaeological data documentation, structuring, fusion, and analysis. The proposed approach is applied to documenting the Bosporus archaeological expedition of the Russian State Historical Museum.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geospatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, and the import of typical CAD models. Finally, based on these technologies, we develop a prototype system, and the results show that our method achieves good results.
Revealing Spatial Variation and Correlation of Urban Travels from Big Trajectory Data
NASA Astrophysics Data System (ADS)
Li, X.; Tu, W.; Shen, S.; Yue, Y.; Luo, N.; Li, Q.
2017-09-01
With the development of information and communication technology, spatial-temporal data that contain rich human mobility information are growing rapidly. However, the consistency of multi-mode human travel behind multi-source spatial-temporal data is not clear. To this end, we utilized a week of taxi and bus GPS trajectory data and smart card data in Shenzhen, China to extract city-wide travel information for taxi, bus and metro and tested the correlation of multi-mode travel characteristics. Both the global and local correlations of typical travel indicators were examined. The results show that: (1) significant differences exist among urban multi-mode travels; the correlations between bus and taxi travel and between metro and taxi travel are globally low but locally high; (2) there are spatial differences in the correlation relationships between bus, metro and taxi travel. These findings help us understand urban travel more deeply and therefore facilitate both transport policy making and human-space interaction research.
A proposal to extend our understanding of the global economy
NASA Technical Reports Server (NTRS)
Hough, Robbin R.; Ehlers, Manfred
1991-01-01
Satellites acquire information on a global and repetitive basis. They are thus ideal tools when global-scale analysis over time is required. Data from satellites come in digital form, which means they are ideally suited for incorporation into digital databases and can be evaluated using automated techniques. The development of a global multi-source data set is proposed that integrates digital information on some 15,000 major industrial sites worldwide with remotely sensed images of those sites. The resulting data set would provide the basis for a wide variety of studies of the global economy. The preliminary results give promise of a new class of global policy model which is far more detailed and helpful to local policy makers than its predecessors. The central thesis of this proposal is that major industrial sites can be identified and their utilization can be tracked with the aid of satellite images.
NASA Astrophysics Data System (ADS)
Gao, Zhiqiang; Xu, Fuxiang; Song, Debin; Zheng, Xiangyu; Chen, Maosi
2017-09-01
This paper conducted dynamic monitoring of the green tide (the large green alga Ulva prolifera) that occurred in the Yellow Sea from 2014 to 2016, using multi-source remote sensing data, including GF-1 WFV, HJ-1A/1B CCD, CBERS-04 WFI, Landsat-7 ETM+ and Landsat-8 OLI, and combining the VB-FAH index (Virtual-Baseline Floating macroAlgae Height) with manually assisted interpretation based on remote sensing and geographic information system technologies. The results show that unmanned aerial vehicle (UAV) and shipborne platforms could accurately monitor the distribution of Ulva prolifera in small areas and therefore provide validation data for the remote sensing monitoring of Ulva prolifera. The results of this research can provide effective information support for the prevention and control of Ulva prolifera.
Li, Wen-Jie; Zhang, Shi-Huang; Wang, Hui-Min
2011-12-01
Ecosystem services evaluation is a hot topic in current ecosystem management and is closely linked to human welfare. This paper summarized research progress on the evaluation of ecosystem services based on geographic information system (GIS) and remote sensing (RS) technology, which can be reduced to the following three characteristics: ecological economics theory is widely applied as a key method for quantifying ecosystem services; GIS and RS technology play a key role in multi-source data acquisition, spatiotemporal analysis, and integrated platforms; and ecosystem mechanism models have become a powerful tool for understanding the relationships between natural phenomena and human activities. Addressing the present research status and its inadequacies, this paper put forward an "Assembly Line" framework, a distributed framework with scalable characteristics, and discussed the future development trend of integrated research on ecosystem services evaluation based on GIS and RS technologies.
Personality disorder symptoms are differentially related to divorce frequency.
Disney, Krystle L; Weinstein, Yana; Oltmanns, Thomas F
2012-12-01
Divorce is associated with a multitude of outcomes related to health and well-being. Data from a representative community sample (N = 1,241) of St. Louis residents (ages 55-64) were used to examine associations between personality pathology and divorce in late midlife. Symptoms of the 10 DSM-IV personality disorders were assessed with the Structured Interview for DSM-IV Personality and the Multisource Assessment of Personality Pathology (both self and informant versions). Multiple regression analyses showed Paranoid and Histrionic personality disorder symptoms to be consistently and positively associated with number of divorces across all three sources of personality assessment. Conversely, Avoidant personality disorder symptoms were negatively associated with number of divorces. The present paper provides new information about the relationship between divorce and personality pathology at a developmental stage that is understudied in both domains. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Using soft-hard fusion for misinformation detection and pattern of life analysis in OSINT
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Shabarekh, Charlotte
2017-05-01
Today's battlefields are shifting to "denied areas", where the use of U.S. military air and ground assets is limited. To succeed, U.S. intelligence analysts increasingly rely on available open-source intelligence (OSINT), which is fraught with inconsistencies, biased reporting and fake news. Analysts need automated tools for retrieval of information from OSINT sources, and these solutions must identify and resolve conflicting and deceptive information. In this paper, we present a misinformation detection model (MDM) which converts text to attributed knowledge graphs and runs graph-based analytics to identify misinformation. At the core of our solution is the identification of knowledge conflicts in the fused multi-source knowledge graph, together with semi-supervised learning to compute locally consistent reliability and credibility scores for the documents and sources, respectively. We present validation of the proposed method using an open-source dataset constructed from the online investigations of the MH17 downing in Eastern Ukraine.
GEOGLAM Crop Monitor Assessment Tool: Developing Monthly Crop Condition Assessments
NASA Astrophysics Data System (ADS)
McGaughey, K.; Becker Reshef, I.; Barker, B.; Humber, M. L.; Nordling, J.; Justice, C. O.; Deshayes, M.
2014-12-01
The Group on Earth Observations (GEO) developed the Global Agricultural Monitoring initiative (GEOGLAM) to improve existing agricultural information through a network of international partnerships, data sharing, and operational research. This presentation will discuss the Crop Monitor component of GEOGLAM, which provides the Agricultural Market Information System (AMIS) with an international, multi-source, and transparent consensus assessment of crop growing conditions, status, and agro-climatic conditions likely to impact global production. This activity covers the four primary crop types (wheat, maize, rice, and soybean) within the main agricultural producing regions of the AMIS countries. These assessments have been produced operationally since September 2013 and are published in the AMIS Market Monitor Bulletin. The Crop Monitor reports provide cartographic and textual summaries of crop conditions as of the 28th of each month, according to crop type. This presentation will focus on the building of international networks, data collection, and data dissemination.
NASA Astrophysics Data System (ADS)
Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.
2015-12-01
Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people don't have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with vital information to ensure food security. Google Earth Engine (GEE) is a cloud computing platform, which integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capabilities of storing and analyzing the data sets, it provides an ideal platform for the development of advanced analytic tools for extracting key variables used in regional and national food security systems. With the high performance computing and storing capabilities of GEE, a cloud-computing based system for near real-time crop land monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al.[1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. The seasonality of MODIS time series data, for example, the start date of the growing season, length of growing season, and NDVI peak at a field-level are obtained for evaluating the crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information of near real-time crop-growth conditions through the visualization of MODIS NDVI time series and comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at crop-field level online. This unique utilization of GEE in combination with advanced analytic and extraction techniques provides a vital remote sensing tool for decision makers and scientists with a high-degree of flexibility to adapt to different uses.
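The seasonality metrics mentioned above (start of season, season length, NDVI peak) can be illustrated without TIMESAT by a simple fraction-of-amplitude threshold on a yearly NDVI profile, as in the sketch below; the threshold fraction and the synthetic 16-day profile are assumptions for illustration only.

```python
import numpy as np

def seasonality(ndvi, frac=0.2):
    """Start/end/length of growing season and NDVI peak from a yearly
    NDVI profile, using a simple fraction-of-amplitude threshold."""
    base, peak = ndvi.min(), ndvi.max()
    thresh = base + frac * (peak - base)
    above = np.where(ndvi >= thresh)[0]
    start, end = int(above[0]), int(above[-1])
    return {"start": start, "end": end,
            "length": end - start, "peak_ndvi": float(peak)}

# Synthetic MODIS-like NDVI profile (23 sixteen-day composites per year).
t = np.arange(23)
ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((t - 12) / 3.5) ** 2)
print(seasonality(ndvi))
```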
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A
2016-10-26
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate each node's significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from the data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the determining factors of a node's importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
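For reference, the Collective Influence score of a node i at radius l is CI_l(i) = (k_i − 1) · Σ_{j∈∂Ball(i,l)} (k_j − 1), the sum running over the frontier of the ball of radius l around i. The sketch below computes this score with networkx on a toy graph; the full CI algorithm additionally removes the top-ranked node and recomputes scores adaptively, which is omitted here.

```python
import networkx as nx

def collective_influence(G, node, radius=2):
    """CI_l(i) = (k_i - 1) * sum of (k_j - 1) over nodes j exactly at
    shortest-path distance 'radius' from i (the frontier of the ball)."""
    dists = nx.single_source_shortest_path_length(G, node, cutoff=radius)
    frontier = [j for j, d in dists.items() if d == radius]
    return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

# Toy scale-free graph standing in for a real social or citation network.
G = nx.barabasi_albert_graph(200, 2, seed=7)
ranked = sorted(G.nodes, key=lambda n: collective_influence(G, n), reverse=True)
print("top spreader candidates:", ranked[:5])
```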
Sánchez-Rodríguez, Aminael; Tejera, Eduardo; Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M. Natália D. S.; Le-Thi-Thu, Huong; Pham-The, Hai
2018-01-01
Gastric cancer is the third leading cause of cancer-related mortality worldwide and despite advances in prevention, diagnosis and therapy, it is still regarded as a global health concern. The efficacy of the therapies for gastric cancer is limited by a poor response to currently available therapeutic regimens. One of the reasons that may explain these poor clinical outcomes is the highly heterogeneous nature of this disease. In this sense, it is essential to discover new molecular agents capable of targeting various gastric cancer subtypes simultaneously. Here, we present a multi-objective approach for the ligand-based virtual screening discovery of chemical compounds simultaneously active against the gastric cancer cell lines AGS, NCI-N87 and SNU-1. The proposed approach relies on a novel methodology based on the development of ensemble models for bioactivity prediction against each individual gastric cancer cell line. The methodology includes the aggregation of one ensemble per cell line, using a desirability-based algorithm, into virtual screening protocols. Our research leads to the proposal of a multi-targeted virtual screening protocol able to achieve high enrichment of known chemicals with anti-gastric cancer activity. Specifically, our results indicate that, using the proposed protocol, it is possible to retrieve almost 20 times more multi-targeted compounds in the first 1% of the ranked list than what is expected from a uniform distribution of the active ones in the virtual screening database. More importantly, the proposed protocol attains an outstanding initial enrichment of known multi-targeted anti-gastric cancer agents. PMID:29420638
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is a key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach each generalized numerical lattice brings post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc)[1]. From this representation generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application in space science data, highlighting the possibility of a real time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated to X flare-classes. The association with geomagnetic activity is also reported. DFA method is performed in the framework of a radio burst automatic monitoring system. Our results may characterize the variability pattern evolution, computing the DFA scaling exponent, scanning the time series by a short windowing before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented. The prototype for visual analytics is implemented in a Compute Unified Device Architecture (CUDA) by using the K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
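Since DFA is the analysis engine in the monitoring system described above, a compact generic implementation is sketched below (integrate the series, detrend fixed-size windows with linear fits, and estimate the scaling exponent from the log-log slope of the fluctuation function). It is the textbook algorithm, not the authors' CUDA pipeline, and the window scales are arbitrary choices.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: returns the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    flucts = []
    for s in scales:
        rms = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(3)
print(f"white noise alpha ~ {dfa_exponent(rng.normal(size=4096)):.2f}")  # expect ~0.5
```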
Velpuri, Naga Manohar; Senay, Gabriel B.; Rowland, James; Verdin, James P.; Alemu, Henok; Melesse, Assefa M.; Abtew, Wossenu; Setegn, Shimelis G.
2014-01-01
Continental Africa has the highest volume of water stored in wetlands, large lakes, reservoirs, and rivers, yet it suffers from problems such as water availability and access. With climate change intensifying the hydrologic cycle and altering the distribution and frequency of rainfall, the problem of water availability and access will increase further. Famine Early Warning Systems Network (FEWS NET) funded by the United States Agency for International Development (USAID) has initiated a large-scale project to monitor small to medium surface water points in Africa. Under this project, multisource satellite data and hydrologic modeling techniques are integrated to monitor several hundreds of small to medium surface water points in Africa. This approach has been already tested to operationally monitor 41 water points in East Africa. The validation of modeled scaled depths with field-installed gauge data demonstrated the ability of the model to capture both the spatial patterns and seasonal variations. Modeled scaled estimates captured up to 60 % of the observed gauge variability with a mean root-mean-square error (RMSE) of 22 %. The data on relative water level, precipitation, and evapotranspiration (ETo) for water points in East and West Africa were modeled since 1998 and current information is being made available in near-real time. This chapter presents the approach, results from the East African study, and the first phase of expansion activities in the West Africa region. The water point monitoring network will be further expanded to cover much of sub-Saharan Africa. The goal of this study is to provide timely information on the water availability that would support already established FEWS NET activities in Africa. This chapter also presents the potential improvements in modeling approach to be implemented during future expansion in Africa.
Marine natural products for multi-targeted cancer treatment: A future insight.
Kumar, Maushmi S; Adki, Kaveri M
2018-05-30
Cancer is the world's second most alarming disease; it involves abnormal cell growth and has the potential to spread to other parts of the body. Most of the available anticancer drugs are designed to act on specific targets by altering the activity of the involved transporters and genes. As cancer cells exhibit complex cellular machinery, the regeneration of cancer tissues and chemoresistance towards therapy have been the main obstacles in cancer treatment. This encourages researchers to explore the multitargeted use of existing medicines to overcome the shortcomings of chemotherapy and to provide alternative and safer treatment strategies. Recent developments in genomics-proteomics and an understanding of the molecular pharmacology of cancer have also challenged researchers to come up with target-based drugs. The literature supports the evidence of natural compounds exhibiting antioxidant, antimitotic, anti-inflammatory, antibiotic as well as anticancer activity. In this review, we have selected marine sponges as a prolific source of bioactive compounds which can be explored for their possible use in cancer, and we have tried to link their roles to cancer pathways. To prove this, we revisited the literature to select cancer genes for the multitargeted use of existing drugs and natural products. We used Cytoscape network analysis and the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) to study the possible interactions and to show the links between the antioxidant, antibiotic, anti-inflammatory and antimitotic agents and their targets for possible use in cancer. We included a total of 78 pathways, their genes, and natural compounds from the above four pharmacological classes used in cancer treatment for a multitargeted approach. Based on the Cytoscape network analysis results, we shortlisted 22 genes based on their average shortest path length connecting one node to all other nodes in the network. These selected genes are CDKN2A, FH, VHL, STK11, SUFU, RB1, MEN1, HRPT2, EXT1, 2, CDK4, p14, p16, TSC1, 2, AXIN2, SDBH C, D, NF1, 2, BHD, PTCH, GPC3, CYLD and WT1. The selected genes were analysed using STRING for their protein-protein interactions. Based on the above findings, we propose that the selected genes be considered major targets and be studied for discovering marine natural products as drug leads in cancer treatment. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
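The shortlisting criterion described above (average shortest path length from a gene node to all other nodes) can be illustrated with networkx as below; the toy edge list is invented for illustration and does not reproduce the paper's STRING network.

```python
import networkx as nx

# Toy gene-interaction graph; edges are illustrative, not STRING data.
edges = [("CDKN2A", "RB1"), ("RB1", "CDK4"), ("CDK4", "TSC1"),
         ("TSC1", "TSC2"), ("VHL", "FH"), ("FH", "RB1"), ("NF1", "CDK4")]
G = nx.Graph(edges)

def mean_shortest_path(G, node):
    """Average shortest-path length from 'node' to every other reachable node."""
    lengths = nx.single_source_shortest_path_length(G, node)
    others = [d for n, d in lengths.items() if n != node]
    return sum(others) / len(others)

# Genes with the smallest mean distance are the most "central" by this criterion.
shortlist = sorted(G.nodes, key=lambda n: mean_shortest_path(G, n))
print(shortlist[:5])
```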
Integrated Dynamic Transit Operations (IDTO) concept of operations.
DOT National Transportation Integrated Search
2012-05-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Kim, Sunghun; Sterling, Bobbie Sue; Latimer, Lara
2010-01-01
Developing focused and relevant health promotion interventions is critical for behavioral change in a low-resource or special population. Evidence-based interventions, however, may not match the specific population or health concern of interest. This article describes the Multi-Source Method (MSM) which, in combination with a workshop format, may be used by health professionals and researchers in health promotion program development. The MSM draws on positive deviance practices and processes, focus groups, community advisors, behavioral change theory, and evidence-based strategies. Use of the MSM is illustrated in development of ethnic-specific weight loss interventions for low-income postpartum women. The MSM may be useful in designing future health programs designed for other special populations for whom existing interventions are unavailable or lack relevance. PMID:20433674
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
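A minimal OpenCV sketch of the pre-registration stage only (SIFT keypoint matching with Lowe's ratio test followed by a RANSAC-estimated affine transform) is given below, assuming grayscale, roughly co-located image chips with enough matches; the wavelet-pyramid Harris matching and TIN-based piecewise-linear fine stage are not implemented, and the file names are placeholders.

```python
import cv2
import numpy as np

def coarse_register(input_img, reference_img, ratio=0.75):
    """Coarsely align input_img to reference_img with SIFT matches
    and a RANSAC-estimated affine transform (pre-registration stage)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(input_img, None)
    k2, d2 = sift.detectAndCompute(reference_img, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]  # Lowe ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference_img.shape[:2]
    return cv2.warpAffine(input_img, A, (w, h))

# Placeholder file names; replace with actual co-located image chips.
img = cv2.imread("input_scene.tif", cv2.IMREAD_GRAYSCALE)
ref = cv2.imread("reference_scene.tif", cv2.IMREAD_GRAYSCALE)
if img is not None and ref is not None:
    cv2.imwrite("input_coarse_aligned.tif", coarse_register(img, ref))
```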
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services within healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2 and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The framework is used to improve healthcare scalability and efficiency by enhancing the remote triaging and remote prioritization processes for patients. It is also used to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As a telemonitoring system consists of three tiers (sensors/sources, base station and server), the simulation of the MSHA algorithm at the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely prioritizing and triaging patients is our main goal. Meanwhile, the role of multi-source data fusion in telemonitoring healthcare systems is demonstrated. In addition, we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results for different symptoms related to different emergency levels of chronic heart disease demonstrate the superiority of our algorithm compared with conventional algorithms in terms of classifying and prioritizing patients remotely.
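As a toy illustration of the kind of multi-source fusion and prioritization the framework performs, the sketch below combines heart rate, SpO2, blood pressure, and a text-based complaint into a coarse priority level; the thresholds, weights, and labels are invented for illustration and are not MSHA's rules or clinical guidance.

```python
def triage_level(heart_rate, spo2, systolic_bp, complaint=""):
    """Fuse vital-sign sources and a text-based input into a coarse
    priority level (3 = urgent, 2 = semi-urgent, 1 = routine).
    All thresholds are illustrative, not clinical guidance."""
    score = 0
    score += 2 if heart_rate > 120 or heart_rate < 45 else (1 if heart_rate > 100 else 0)
    score += 2 if spo2 < 90 else (1 if spo2 < 94 else 0)
    score += 2 if systolic_bp < 90 or systolic_bp > 180 else 0
    if "chest pain" in complaint.lower():      # simple text-based evidence
        score += 2
    return 3 if score >= 4 else (2 if score >= 2 else 1)

patients = [
    {"id": "A", "hr": 130, "spo2": 88, "sbp": 85, "complaint": "chest pain"},
    {"id": "B", "hr": 95, "spo2": 97, "sbp": 120, "complaint": "headache"},
]
queue = sorted(patients, reverse=True,
               key=lambda p: triage_level(p["hr"], p["spo2"], p["sbp"], p["complaint"]))
print([p["id"] for p in queue])   # highest-priority patient first
```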
Gradient-Type Magnetoelectric Current Sensor with Strong Multisource Noise Suppression.
Zhang, Mingji; Or, Siu Wing
2018-02-14
A novel gradient-type magnetoelectric (ME) current sensor operating in magnetic field gradient (MFG) detection and conversion mode is developed based on a pair of ME composites that have a back-to-back capacitor configuration under a baseline separation and a magnetic biasing in an electrically-shielded and mechanically-enclosed housing. The physics behind the current sensing process is the product effect of the current-induced MFG effect associated with vortex magnetic fields of current-carrying cables (i.e., MFG detection) and the MFG-induced ME effect in the ME composite pair (i.e., MFG conversion). The sensor output voltage is directly obtained from the gradient ME voltage of the ME composite pair and is calibrated against cable current to give the current sensitivity. The current sensing performance of the sensor is evaluated, both theoretically and experimentally, under multisource noises of electric fields, magnetic fields, vibrations, and thermals. The sensor combines the merits of small nonlinearity in the current-induced MFG effect with those of high sensitivity and high common-mode noise rejection rate in the MFG-induced ME effect to achieve a high current sensitivity of 0.65-12.55 mV/A in the frequency range of 10 Hz-170 kHz, a small input-output nonlinearity of <500 ppm, a small thermal drift of <0.2%/℃ in the current range of 0-20 A, and a high common-mode noise rejection rate of 17-28 dB from multisource noises.
Gradient-Type Magnetoelectric Current Sensor with Strong Multisource Noise Suppression
2018-01-01
A novel gradient-type magnetoelectric (ME) current sensor operating in magnetic field gradient (MFG) detection and conversion mode is developed based on a pair of ME composites that have a back-to-back capacitor configuration under a baseline separation and a magnetic biasing in an electrically-shielded and mechanically-enclosed housing. The physics behind the current sensing process is the product effect of the current-induced MFG effect associated with vortex magnetic fields of current-carrying cables (i.e., MFG detection) and the MFG-induced ME effect in the ME composite pair (i.e., MFG conversion). The sensor output voltage is directly obtained from the gradient ME voltage of the ME composite pair and is calibrated against cable current to give the current sensitivity. The current sensing performance of the sensor is evaluated, both theoretically and experimentally, under multisource noises of electric fields, magnetic fields, vibrations, and thermals. The sensor combines the merits of small nonlinearity in the current-induced MFG effect with those of high sensitivity and high common-mode noise rejection rate in the MFG-induced ME effect to achieve a high current sensitivity of 0.65–12.55 mV/A in the frequency range of 10 Hz–170 kHz, a small input-output nonlinearity of <500 ppm, a small thermal drift of <0.2%/℃ in the current range of 0–20 A, and a high common-mode noise rejection rate of 17–28 dB from multisource noises. PMID:29443920
The assessment of pathologists/laboratory medicine physicians through a multisource feedback tool.
Lockyer, Jocelyn M; Violato, Claudio; Fidler, Herta; Alakija, Pauline
2009-08-01
There is increasing interest in ensuring that physicians demonstrate the full range of Accreditation Council for Graduate Medical Education competencies. To determine whether it is possible to develop a feasible and reliable multisource feedback instrument for pathologists and laboratory medicine physicians. Surveys with 39, 30, and 22 items were developed to assess individual physicians by 8 peers, 8 referring physicians, and 8 coworkers (eg, technologists, secretaries), respectively, using 5-point scales and an unable-to-assess category. Physicians completed a self-assessment survey. Items addressed key competencies related to clinical competence, collaboration, professionalism, and communication. Data from 101 pathologists and laboratory medicine physicians were analyzed. The mean number of respondents per physician was 7.6, 7.4, and 7.6 for peers, referring physicians, and coworkers, respectively. The reliability of the internal consistency, measured by Cronbach alpha, was ≥0.95 for the full scale of all instruments. Analysis indicated that the medical peer, referring physician, and coworker instruments achieved generalizability coefficients of 0.78, 0.81, and 0.81, respectively. Factor analysis showed 4 factors on the peer questionnaire accounted for 68.8% of the total variance: reports and clinical competency, collaboration, educational leadership, and professional behavior. For the referring physician survey, 3 factors accounted for 66.9% of the variance: professionalism, reports, and clinical competency. Two factors on the coworker questionnaire accounted for 59.9% of the total variance: communication and professionalism. It is feasible to assess this group of physicians using multisource feedback with instruments that are reliable.
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
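The second strategy described above (one base classifier per data source, with missing prediction scores estimated before fusion) can be sketched roughly as follows, assuming binary labels and scikit-learn; the classifier and imputer choices are illustrative, not the authors'.

    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression

    def score_fusion(train_sources, y, test_sources):
        """train_sources/test_sources: one 2-D float array per modality,
        with all-NaN rows for subjects missing that modality; y: binary labels."""
        train_cols, test_cols = [], []
        for Xtr, Xte in zip(train_sources, test_sources):
            seen = ~np.isnan(Xtr).any(axis=1)            # subjects with this source
            clf = LogisticRegression(max_iter=1000).fit(Xtr[seen], y[seen])
            col_tr = np.full(len(y), np.nan)
            col_tr[seen] = clf.predict_proba(Xtr[seen])[:, 1]
            ok = ~np.isnan(Xte).any(axis=1)
            col_te = np.full(Xte.shape[0], np.nan)
            col_te[ok] = clf.predict_proba(Xte[ok])[:, 1]
            train_cols.append(col_tr)
            test_cols.append(col_te)
        S_tr, S_te = np.column_stack(train_cols), np.column_stack(test_cols)
        imputer = SimpleImputer(strategy="mean").fit(S_tr)   # estimate missing scores
        fusion = LogisticRegression().fit(imputer.transform(S_tr), y)
        return fusion.predict(imputer.transform(S_te))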
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2012-01-01
Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655
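A small sketch of the grouping step underlying the iMSF idea: subjects are partitioned by which data sources they actually have, so that a model can be fit per availability pattern while features are shared across patterns. The code below is a generic illustration of that partitioning, not the paper's implementation.

    from collections import defaultdict
    import numpy as np

    def availability_groups(sources):
        """sources: one 2-D float array per modality; all-NaN rows mark subjects
        missing that modality. Returns {availability pattern: subject indices}."""
        n_subjects = sources[0].shape[0]
        groups = defaultdict(list)
        for i in range(n_subjects):
            pattern = tuple(not np.isnan(X[i]).any() for X in sources)
            if any(pattern):                      # keep subjects with >= 1 source
                groups[pattern].append(i)
        return groups

    # With 4 modalities there are at most 2**4 - 1 = 15 usable patterns; a model
    # is fit within each group while the selected features are shared across groups.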
On the Discovery of Evolving Truth
Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei
2015-01-01
In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
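The general weighted-voting-with-reliability loop that truth discovery builds on can be sketched as below; the reliability update shown is a common smoothed-accuracy choice, not the paper's exact incremental formula.

    from collections import defaultdict

    def truth_discovery(claims, n_iter=10, weights=None):
        """claims: list of (source, object, value) triples. Pass the previous
        `weights` back in to warm-start when new claims arrive incrementally."""
        sources = {s for s, _, _ in claims}
        weights = dict(weights or {})
        for s in sources:
            weights.setdefault(s, 1.0)
        truth = {}
        for _ in range(n_iter):
            votes = defaultdict(lambda: defaultdict(float))
            for s, o, v in claims:                # reliability-weighted voting
                votes[o][v] += weights[s]
            truth = {o: max(vals, key=vals.get) for o, vals in votes.items()}
            for s in sources:                     # re-estimate source reliability
                made = [(o, v) for src, o, v in claims if src == s]
                correct = sum(1 for o, v in made if truth[o] == v)
                weights[s] = (correct + 1.0) / (len(made) + 2.0)  # smoothed accuracy
        return truth, weights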
Evidence Combination From an Evolutionary Game Theory Perspective.
Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu
2016-09-01
Dempster-Shafer evidence theory is a primary methodology for multisource information fusion because it is good at dealing with uncertain information. This theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multievidence system. Within the proposed ECR, we develop a Jaccard matrix game to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been mathematically proved as well.
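For reference, the classical Dempster's rule of combination that the ECR is compared against can be written compactly as follows; this is the baseline rule, not the proposed ECR.

    def dempster_combine(m1, m2):
        """Combine two mass functions given as {frozenset(hypotheses): mass}."""
        combined, conflict = {}, 0.0
        for a, va in m1.items():
            for b, vb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + va * vb
                else:
                    conflict += va * vb           # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
    m1 = {A: 0.6, B: 0.3, A | B: 0.1}
    m2 = {A: 0.5, C: 0.4, A | C: 0.1}
    print(dempster_combine(m1, m2))   # high conflict concentrates mass on {A}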
Traube, Dorian E.; Holloway, Ian W.; Schrager, Sheree M.; Kipke, Michele D.
2011-01-01
Background Young men who have sex with men (YMSM) continue to be at elevated risk for substance use; however, models explaining this phenomenon have often focused on a limited array of explanatory constructs. Purpose This study utilizes Social Action Theory (SAT) as a framework to address gaps in research by documenting the social, behavioral, and demographic risk factors associated with illicit drug use among YMSM. Methods Structural equation modeling was used to apply SAT to a cross-sectional sample of 526 men from the Healthy Young Men Study, a longitudinal study of substance use and sexual risk behavior among YMSM in Los Angeles. Results The final model possessed very good fit statistics (CFI = 0.936, TLI = 0.925, RMSEA = 0.040) indicating that SAT is appropriate for use with YMSM. Conclusions Substance use interventions for YMSM could be enhanced by employing SAT as conceptualized in this study and using a multi-targeted strategy for impacting illicit drug use. PMID:21644802
Gao, Lin; Li, Chang-chun; Wang, Bao-shan; Yang Gui-jun; Wang, Lei; Fu, Kui
2016-01-01
With the innovation of remote sensing technology, remote sensing data sources are more and more abundant. The main aim of this study was to analyze the retrieval accuracy of soybean leaf area index (LAI) based on multi-source remote sensing data including ground hyperspectral, unmanned aerial vehicle (UAV) multispectral and Gaofen-1 (GF-1) WFV data. Ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI), and triangle vegetation index (TVI) were used to establish LAI retrieval models, respectively. The models with the highest calibration accuracy were used in the validation. The capability of these three kinds of remote sensing data for LAI retrieval was assessed according to the estimation accuracy of the models. The experimental results showed that the models based on the ground hyperspectral and UAV multispectral data achieved better estimation accuracy (R² was more than 0.69 and RMSE was less than 0.4 at the 0.01 significance level) than the model based on WFV data. The RVI logarithmic model based on ground hyperspectral data was slightly superior to the NDVI linear model based on UAV multispectral data (the differences in E(A), R² and RMSE were 0.3%, 0.04 and 0.006, respectively). The models based on WFV data had the lowest estimation accuracy, with R² less than 0.30 and RMSE more than 0.70. The effects of sensor spectral response characteristics, sensor geometric location and spatial resolution on soybean LAI retrieval are discussed. The results demonstrated that ground hyperspectral data were advantageous, but not prominently so, over traditional multispectral data in soybean LAI retrieval. WFV imagery with 16 m spatial resolution could not meet the requirements of crop growth monitoring at the field scale. Given the need for high precision in retrieving soybean LAI and for working efficiently, acquiring agricultural information by UAV remote sensing can be regarded as an optimal approach. Therefore, with more and more remote sensing information sources becoming available, agricultural UAV remote sensing could become an important information resource for guiding field-scale crop management and provide more scientific and accurate information for precision agriculture research.
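The index-based retrieval step can be illustrated with a short sketch that computes several of the indices named above from red and near-infrared reflectance and fits a simple linear model against measured LAI. The reflectance values, LAI values, and SAVI soil factor are placeholders; TVI, which needs a green band, is omitted.

    import numpy as np

    def indices(red, nir, L=0.5):
        """Vegetation indices from red/NIR reflectance; L is the SAVI soil factor."""
        return {
            "RVI":  nir / red,
            "NDVI": (nir - red) / (nir + red),
            "SAVI": (1 + L) * (nir - red) / (nir + red + L),
            "DVI":  nir - red,
        }

    red = np.array([0.08, 0.10, 0.06, 0.05, 0.12])   # toy reflectance samples
    nir = np.array([0.42, 0.38, 0.47, 0.50, 0.30])
    lai = np.array([3.1, 2.6, 3.8, 4.2, 1.9])        # matching field-measured LAI

    ndvi = indices(red, nir)["NDVI"]
    slope, intercept = np.polyfit(ndvi, lai, 1)      # fit LAI = a*NDVI + b
    print(f"LAI ~ {slope:.2f}*NDVI + {intercept:.2f}")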
A perturbation method to the tent map based on Lyapunov exponent and its application
NASA Astrophysics Data System (ADS)
Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu
2015-10-01
Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic map, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves the good cryptographic properties of a pseudo-random sequence. As a result, it weakens the strong-correlation phenomenon caused by finite precision and effectively compensates for the dynamics degradation of digital chaotic systems. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
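A minimal sketch of the described perturbation loop follows: the tent-map output is post-processed by a Chebyshev map, and when the Chebyshev value falls inside a chosen window it is mapped back into a valid tent parameter. The window, the rescaling into (0.375, 0.625), and the initial values are assumptions for illustration, not the paper's settings.

    import math

    def perturbed_tent_sequence(x=0.37, p=0.61, n=1000, window=(-0.5, 0.5), k=4):
        seq = []
        for _ in range(n):
            x = x / p if x < p else (1 - x) / (1 - p)     # tent map, parameter p
            z = max(-1.0, min(1.0, 2 * x - 1))            # rescale output to [-1, 1]
            y = math.cos(k * math.acos(z))                # Chebyshev map, order k
            if window[0] < y < window[1]:                 # feed back the parameter
                p = 0.25 + 0.5 * (y + 1) / 2              # keeps p in (0.375, 0.625)
            seq.append(x)
        return seq

    bits = [1 if v > 0.5 else 0 for v in perturbed_tent_sequence()]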
NASA Astrophysics Data System (ADS)
Crutchfield, J.
2016-12-01
The presentation will discuss the current status of the International Production Assessment Division of the USDA Foreign Agricultural Service for operational monitoring and forecasting of current crop conditions, and anticipated production changes, to produce monthly, multi-source consensus reports on global crop conditions, including the use of Earth observations (EO) from satellite and in situ sources. The United States Department of Agriculture (USDA) Foreign Agricultural Service (FAS) International Production Assessment Division (IPAD) deals exclusively with global crop production forecasting and agricultural analysis in support of the USDA World Agricultural Outlook Board (WAOB) lockup process and contributions to the World Agricultural Supply and Demand Estimates (WASDE) report. Analysts are responsible for discrete regions or countries and conduct in-depth long-term research into national agricultural statistics, farming systems, and the climatic, environmental, and economic factors affecting crop production. IPAD analysts become highly valued cross-commodity specialists over time, and are routinely sought out for specialized analyses to support governmental studies. IPAD is responsible for grain, oilseed, and cotton analysis on a global basis. IPAD is unique in the tools it uses to analyze crop conditions around the world, including custom weather analysis software and databases, satellite imagery and value-added image interpretation products. It also incorporates all traditional agricultural intelligence resources into its forecasting program, to make the fullest use of available information in its operational commodity forecasts and analysis. International travel and training play an important role in learning about foreign agricultural production systems and in developing analyst knowledge and capabilities.
Murphy, Douglas J; Bruce, David A; Mercer, Stewart W; Eva, Kevin W
2009-05-01
To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. Performance of GP registrars (trainees) was evaluated with each tool to assess the reliability of the tools and their feasibility, given the raters and number of assessments needed. Participant experience of the process was determined by questionnaire. 171 GP registrars and their trainers, drawn from nine deaneries (representing all four countries in the UK), participated. The ability of each tool to differentiate between doctors (reliability) was assessed using generalisability theory. Decision studies were then conducted to determine the number of observations required to achieve an acceptably high reliability for "high-stakes assessment" using each instrument. Finally, descriptive statistics were used to summarise participants' ratings of their experience using these tools. Multi-source feedback from colleagues and patient feedback on consultations emerged as the two methods most likely to offer a reliable and feasible opinion of workplace performance. Reliability coefficients of 0.8 were attainable with 41 CARE Measure patient questionnaires and six clinical and/or five non-clinical colleagues per doctor when assessed on two occasions. For the other four methods tested, 10 or more assessors were required per doctor in order to achieve a reliable assessment, making the feasibility of their use in high-stakes assessment extremely low. Participant feedback did not raise any major concerns regarding the acceptability, feasibility, or educational impact of the tools. The combination of patient and colleague views of doctors' performance, coupled with reliable competence measures, may offer a suitable evidence base on which to monitor progress and completion of doctors' training in general practice.
Surveillance for work-related skull fractures in Michigan.
Kica, Joanna; Rosenman, Kenneth D
2014-12-01
The objective was to develop a multisource surveillance system for work-related skull fractures. Records on work-related skull fractures were obtained from Michigan's 134 hospitals, Michigan's Workers' Compensation Agency and death certificates. Cases from the three sources were matched to eliminate duplicates from more than one source. Workplaces where the most severe injuries occurred were referred to OSHA for an enforcement inspection. There were 318 work-related skull fractures, not including facial fractures, between 2010 and 2012. In 2012, after the inclusion of facial fractures, 316 fractures were identified, of which 218 (69%) were facial fractures. The Bureau of Labor Statistics' (BLS) 2012 estimate of skull fractures in Michigan, which includes facial fractures, was 170, which was 53.8% of those identified from our review of medical records. The inclusion of facial fractures in the surveillance system increased the percentage of women identified from 15.4% to 31.2%, decreased severity (hospitalization went from 48.7% to 10.6% and loss of consciousness went from 56.5% to 17.8%), decreased falls from 48.2% to 27.6%, increased assaults from 5.0% to 20.2%, shifted the most common industry from construction (13.3%) to health care and social assistance (15.0%), and shifted the highest incidence rate from males 65+ (6.8 per 100,000) to young men, 20-24 years (9.6 per 100,000). Workplace inspections resulted in 45 violations and $62,750 in penalties. The Michigan multisource surveillance system of workplace injuries had two major advantages over the existing national system: (a) workplace investigations were initiated, hazards identified, and safety changes implemented at the facilities where the injuries occurred; and (b) a more accurate count was derived, with 86% more work-related skull fractures identified than BLS's employer-based estimate. A more comprehensive system to identify and target interventions for workplace injuries was implemented using hospital and emergency department medical records. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
Fang, Ji-Tseng; Ko, Yu-Shien; Chien, Chu-Chun; Yu, Kuang-Hui
2013-01-01
Since 1994, Taiwanese medical universities have employed the multiple application method comprising "recommendations and screening" and "admission application." The purpose of this study is to examine whether medical students admitted through different admission programs performed differently. To evaluate the six core competencies for medical students proposed by the Accreditation Council for Graduate Medical Education (ACGME), this study employed various assessment tools, including student opinion feedback, multi-source feedback (MSF), course grades, and examination results. The MSF contains a self-assessment scale, peer assessment scale, nursing staff assessment scale, visiting staff assessment scale, and chief resident assessment scale. In the subscales, the Cronbach's alpha values were higher than 0.90, indicating good reliability. Research participants consisted of 182 students from the School of Medicine at Chang Gung University. Regarding students' average grade for the medical ethics course, the performance of students who were enrolled through school recommendations exceeded that of students who were enrolled through the National College University Entrance Examination (NCUEE) (p = 0.011), and all considered "teamwork" the most important competency. Students from different entry pipelines showed no significant difference on the "communication," "work attitude," "medical knowledge," and "teamwork" assessment scales. The improvement rate of the students who were enrolled through school recommendations was better than that of the students who were enrolled through the NCUEE in the "professional skills," "medical core competencies," "communication," and "teamwork" items of the self-assessment and peer assessment scales. However, the students who were enrolled through the NCUEE were better in the "professional skills," "medical core competencies," "communication," and "teamwork" items of the visiting staff assessment scale and the chief resident assessment scale. Collectively, the performance of the students enrolled through recommendations was slightly better than that of the students enrolled through the NCUEE, although statistical significance was found only in certain grades.
Test readiness assessment summary for Integrated Dynamic Transit Operations (IDTO).
DOT National Transportation Integrated Search
2012-10-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Potential effects of sulfur pollutants on grape production in New York State
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudson, D.A.; Viessman, S.
1983-01-01
This paper presents the results of a prototype analysis of the effects of sulfur pollutants on grape production in New York State. Principal grape production areas for the state are defined and predictions of sulfur dioxide concentrations associated with present and projected sources are computed. Sulfur dioxide concentrations are based on the results of a multi-source dispersion model, whereas concentrations for other pollutants are derived from observations. This information is used in conjunction with results from experiments conducted to identify threshold levels of pollutant damage and/or injury to a variety of grape species. A determination is then made whether the subject crop is at risk from present and projected concentrations of pollutants.
NASA Technical Reports Server (NTRS)
Imhoff, M. L.; Tucker, C. J.; Lawrence, W. T.; Stutzer, D.; Rusin, Robert
2000-01-01
Data from two different satellites, a digital land cover map, and digital census data were analyzed and combined in a geographic information system to study the effect of urbanization on photosynthetic vegetation productivity in the United States. Results show that urbanization can have a measurable but variable impact on the primary productivity of the land surface. Annual productivity can be reduced by as much as 20 days in some areas, but in resource limited regions, photosynthetic production can be enhanced by human activity. Overall, urban development reduces the productivity of the land surface and those areas with the highest productivity are directly in the path of urban sprawl.
National Center for Multisource Information Fusion
2009-04-01
discipline. The center has focused its efforts in solving the growing problems of exploiting massive quantities of diverse, and often...development of a comprehensive high level fusion framework that includes the addition of Levels 2, 3 and 4 type tools to the ECCARS...correlate IDS alerts into individual attacks and provide a threat assessment for the network. A comprehensive review of attack graphs was conducted
Systems biology impact on antiepileptic drug discovery.
Margineanu, Doru Georg
2012-02-01
Systems biology (SB), a recent trend in bioscience research to consider the complex interactions in biological systems from a holistic perspective, sees disease as a disturbed network of interactions rather than an alteration of single molecular component(s). SB-based network pharmacology replaces the prevailing focus on specific drug-receptor interactions, and the corollary of rational design of "magic bullet" drugs, with the search for multi-target drugs that would act on biological networks as "magic shotguns". Epilepsy being a multi-factorial, polygenic and dynamic pathology, the SB approach appears particularly fitting and promising for antiepileptic drug (AED) discovery. In fact, long before the advent of SB, AED discovery already involved some SB-like elements. A reported SB project aimed at finding new drug targets in epilepsy relies on a relational database that integrates clinical information, recordings from deep electrodes and 3D brain imagery with histology and molecular biology data on modified expression of specific genes in the brain regions displaying spontaneous epileptic activity. Since hitting a single target does not treat complex diseases, a proper pharmacological promiscuity might impart on an AED the merit of being multi-potent. However, multi-target drug discovery entails the complicated task of optimizing multiple activities of compounds, while having to balance drug-like properties and to control unwanted effects. Specific design tools for this new approach in drug discovery are only beginning to emerge, but computational methods making reliable in silico predictions of poly-pharmacology have appeared, and their progress might be quite rapid. The current move away from reductionism into network pharmacology allows expecting that a proper integration of the intrinsic complexity of epileptic pathology into AED discovery might result in literally anti-epileptic drugs. Copyright © 2011 Elsevier B.V. All rights reserved.
Are multisource levothyroxine sodium tablets marketed in Egypt interchangeable?
Abou-Taleb, Basant A; Bondok, Maha; Nounou, Mohamed Ismail; Khalafallah, Nawal; Khalil, Saleh
2018-02-01
A clinical study was initiated in response to patients' complaints, supported by the treating physicians, of suspected differences in efficacy among multisource levothyroxine sodium tablets marketed in Egypt. The study used a multiple-dose design (one 100 μg levothyroxine sodium tablet once daily for 6 months) and involved 50 female patients with primary hypothyroidism (5 equal groups). Tablets administered included five tablet batches (two brands, three origin locations) purchased from local pharmacies in Alexandria. Assessment parameters (measured on consecutive visits) included thyroid-stimulating hormone and total and free levothyroxine. Tablet dissolution rate was determined (BP/EP 2014 & USP 2014). In vitro vs in vivo correlations were developed. Clinical and pharmaceutical data confirmed inter-brand and inter-source differences in efficacy. The correlations examined indicated the potential usefulness of the in vitro dissolution test in detecting poorly performing levothyroxine sodium tablets during shelf life. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang
In implementation of nuclear safeguards, many different techniques are being used to monitor operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, digital seals to open source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or loose correlations, it could be beneficial to analyze the data sets together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential values in nuclear safeguards.
Multisource drug policies in Latin America: survey of 10 countries.
Homedes, Núria; Ugalde, Antonio
2005-01-01
Essential drug lists and generic drug policies have been promoted as strategies to improve access to pharmaceuticals and control their rapidly escalating costs. This article reports the results of a preliminary survey conducted in 10 Latin American countries. The study aimed to document the experiences of different countries in defining and implementing generic drug policies, determine the cost of registering different types of pharmaceutical products and the time needed to register them, and uncover the incentives governments have developed to promote the use of multisource drugs. The survey instrument was administered in person in Chile, Ecuador and Peru and by email in Argentina, Brazil, Bolivia, Colombia, Costa Rica, Nicaragua and Uruguay. There was a total of 22 respondents. Survey responses indicated that countries use the terms generic and bioequivalence differently. We suggest there is a need to harmonize definitions and technical concepts. PMID:15682251
Effective Coping With Supervisor Conflict Depends on Control: Implications for Work Strains.
Eatough, Erin M; Chang, Chu-Hsiang
2018-01-11
This study examined the interactive effects of interpersonal conflict at work, coping strategy, and perceived control specific to the conflict on employee work strain using multisource and time-lagged data across two samples. In Sample 1, multisource data was collected from 438 employees as well as data from participant-identified secondary sources (e.g., significant others, best friends). In Sample 2, time-lagged data from 100 full-time employees was collected in a constructive replication. Overall, findings suggested that the success of coping efforts as indicated by lower strains hinges on the combination of the severity of the stressor, perceived control over the stressor, and coping strategy used (problem-focused vs. emotion-focused coping). Results from the current study provide insights for why previous efforts to document the moderating effects of coping have been inconsistent, especially with regards to emotion-focused coping. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Yongzhi, WANG; hui, WANG; Lixia, LIAO; Dongsen, LI
2017-02-01
In order to analyse the geological characteristics of salt rock and the stability of salt caverns, rough three-dimensional (3D) models of the salt rock strata and 3D models of the salt caverns in the study areas are built with 3D GIS spatial modeling techniques. During implementation, multi-source data, such as basic geographic data, DEMs, geological plane maps, geological section maps, engineering geological data, and sonar data, are used. In this study, 3D spatial analysis and calculation methods, such as 3D GIS intersection detection, Boolean operations between 3D spatial entities, and 3D spatial grid discretization, are used to build 3D models of the wall rock of salt caverns. Our methods can provide effective calculation models for the numerical simulation and analysis of the creep characteristics of wall rock in salt caverns.
NASA Astrophysics Data System (ADS)
López de Ipiña, JM; Vaquero, C.; Gutierrez-Cañas, C.
2017-06-01
A progressive increase is expected in the industrial processes that manufacture intermediate products (iNEPs) and end products incorporating ENMs (eNEPs) to bring about improved properties. Therefore, the assessment of occupational exposure to airborne NOAA will migrate from the simple and well-controlled exposure scenarios in research laboratories and ENMs production plants using innovative production technologies, to much more complex exposure scenarios located around processes of manufacture of eNEPs that, in many cases, will be modified conventional production processes. Here we discuss some of the typical challenging situations in the process of risk assessment of inhalation exposure to NOAA in Multi-Source Industrial Scenarios (MSIS), on the basis of the lessons learned when confronting those scenarios in the frame of several European and Spanish research projects.
Distributed cluster management techniques for unattended ground sensor networks
NASA Astrophysics Data System (ADS)
Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon
2005-05-01
Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include sensor fusion, data management and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work, including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track updates are performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes' geo-positional locations. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics. High-fidelity multi-target simulation results are presented, indicating that distributing sensor management and tracking capabilities not only reduces communication bandwidth consumption but also simplifies multi-target tracking within the cluster.
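A toy version of the ownership rule sketched above: each node scores its expected coverage of a propagated track from its distance to the track and a local error-covariance measure, and the best-scoring node claims ownership. The scoring function and the numbers used are illustrative assumptions, not the system's actual logic.

    import math

    def coverage_score(node_pos, track_pos, cov_trace, sensing_range=100.0):
        d = math.dist(node_pos, track_pos)
        if d > sensing_range:
            return float("-inf")                  # track outside this node's coverage
        return -(d / sensing_range) - 0.01 * cov_trace

    def select_owner(nodes, track_pos, cov_traces):
        """nodes: {node_id: (x, y)}; cov_traces: {node_id: trace of local track P}."""
        return max(nodes, key=lambda nid: coverage_score(nodes[nid], track_pos,
                                                         cov_traces[nid]))

    nodes = {"n1": (0.0, 0.0), "n2": (60.0, 10.0), "n3": (120.0, 0.0)}
    owner = select_owner(nodes, track_pos=(70.0, 5.0),
                         cov_traces={"n1": 4.0, "n2": 1.5, "n3": 2.0})   # "n2"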
Jeong, Eun Sook; Cha, Eunju; Cha, Sangwon; Kim, Sunghwan; Oh, Han Bin; Kwon, Oh-Seung; Lee, Jaeick
2017-11-21
In this study, a hydrogen/deuterium (H/D) exchange method using gas chromatography-electrospray ionization/mass spectrometry (GC-ESI/MS) was investigated for the first time as a novel tool for online H/D exchange of multitarget analytes. The GC and ESI source were combined with a homemade heated column transfer line. GC-ESI/MS-based H/D exchange occurs in an atmospheric pressure ion source as a result of reacting the gas-phase analyte eluted from the GC with charged droplets of deuterium oxide infused as the ESI spray solvent. The consumption of the deuterated solvent at a flow rate of 2 μL min⁻¹ was more economical than that in online H/D exchange methods reported to date. In-ESI-source H/D exchange by GC-ESI/MS was applied to 11 stimulants with secondary amino or hydroxyl groups. After H/D exchange, the spectra of the stimulants showed unexchanged, partially exchanged, and fully exchanged ions with various degrees of exchange. The relative abundances, corrected for naturally occurring isotopes, of the fully exchanged ions of the stimulants, except for etamivan, were in the range 24.3-85.5%. Methylephedrine and cyclazodone showed low H/D exchange efficiency under acidic, neutral, and basic spray solvent conditions, and no exchange was observed for etamivan, which has an acidic phenolic OH group. The in-ESI-source H/D exchange efficiency of GC-ESI/MS was sufficient to determine the number of exchangeable hydrogens by elucidation of fragmentation from the spectrum. Therefore, this online H/D exchange technique using GC-ESI/MS has potential as an alternative method for simultaneous H/D exchange of multitarget analytes.
Jameel, Ehtesham; Meena, Poonam; Maqbool, Mudasir; Kumar, Jitendra; Ahmed, Waqar; Mumtazuddin, Syed; Tiwari, Manisha; Hoda, Nasimul; Jayaram, B
2017-08-18
In our endeavor towards the development of potent multitarget ligands for the treatment of Alzheimer's disease, a series of triazine-triazolopyrimidine hybrids were designed, synthesized and characterized by various spectral techniques. Docking and scoring techniques were used to design the inhibitors and to display their interactions with key residues of the active site. The organic synthesis relied upon convergent synthetic routes in which mono- and di-substituted triazines were connected with triazolopyrimidine using piperazine as a linker. In total, seventeen compounds were synthesized, of which the di-substituted triazine-triazolopyrimidine derivatives 9a-d showed better acetylcholinesterase (AChE) inhibitory activity than the corresponding tri-substituted triazine-triazolopyrimidine derivatives 10a-f. Of the disubstituted triazine-triazolopyrimidine based compounds, 9a and 9b showed encouraging inhibitory activity on AChE, with IC50 values of 0.065 and 0.092 μM, respectively. Interestingly, 9a and 9b also demonstrated good inhibition selectivity towards AChE over BuChE, by ∼28 fold. Furthermore, kinetic analysis and molecular modeling studies showed that 9a and 9b target both the catalytic active site and the peripheral anionic site of AChE. In addition, these derivatives effectively modulated Aβ self-aggregation, as investigated through CD spectroscopy, ThT fluorescence assay and electron microscopy. Besides, these compounds exhibited potential antioxidant (2.15 and 2.91 trolox equivalents by ORAC assay) and metal-chelating properties. In silico ADMET profiling highlighted that these novel triazine derivatives have appropriate drug-like properties and possess very low toxic effects in the preliminary pharmacokinetic study. Overall, the multitarget profile exerted by these novel triazine molecules qualifies them as potential anti-Alzheimer drug candidates in AD therapy. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Neuropharmacology beyond reductionism - A likely prospect.
Margineanu, Doru Georg
2016-03-01
Neuropharmacology had several major past successes, but the last few decades did not witness any leap forward in the drug treatment of brain disorders. Moreover, current drugs used in neurology and psychiatry alleviate the symptoms, while hardly curing any cause of disease, basically because the etiology of most neuro-psychic syndromes is but poorly known. This review argues that this largely derives from the unbalanced prevalence in neuroscience of the analytic reductionist approach, focused on the cellular and molecular level, while the understanding of integrated brain activities remains flimsier. The decline of drug discovery output in the last decades, quite obvious in neuropharmacology, coincided with the advent of the single target-focused search of potent ligands selective for a well-defined protein, deemed critical in a given pathology. However, all the widespread neuro-psychic troubles are multi-mechanistic and polygenic, their complex etiology making unsuited the single-target drug discovery. An evolving approach, based on systems biology considers that a disease expresses a disturbance of the network of interactions underlying organismic functions, rather than alteration of single molecular components. Accordingly, systems pharmacology seeks to restore a disturbed network via multi-targeted drugs. This review notices that neuropharmacology in fact relies on drugs which are multi-target, this feature having occurred just because those drugs were selected by phenotypic screening in vivo, or emerged from serendipitous clinical observations. The novel systems pharmacology aims, however, to devise ab initio multi-target drugs that will appropriately act on multiple molecular entities. Though this is a task much more complex than the single-target strategy, major informatics resources and computational tools for the systemic approach of drug discovery are already set forth and their rapid progress forecasts promising outcomes for neuropharmacology. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Melson, Ambrose J; Monk, Rebecca Louise; Heim, Derek
2016-12-01
Data-driven student drinking norms interventions are based on reported normative overestimation of the extent and approval of an average student's drinking. Self-reported differences between personal and perceived normative drinking behaviors and attitudes are taken at face value as evidence of actual levels of overestimation. This study investigates whether commonly used data collection methods and socially desirable responding (SDR) may inadvertently impede establishing "objective" drinking norms. U.K. students (N = 421; 69% female; mean age 20.22 years [SD = 2.5]) were randomly assigned to 1 of 3 versions of a drinking norms questionnaire: The standard multi-target questionnaire assessed respondents' drinking attitudes and behaviors (frequency of consumption, heavy drinking, units on a typical occasion) as well as drinking attitudes and behaviors for an "average student." Two deconstructed versions of this questionnaire assessed identical behaviors and attitudes for participants themselves or an "average student." The Balanced Inventory of Desirable Responding was also administered. Students who answered questions about themselves and peers reported more extreme perceived drinking attitudes for the average student compared with those reporting solely on the "average student." Personal and perceived reports of drinking behaviors did not differ between multitarget and single-target versions of the questionnaire. Among those who completed the multitarget questionnaire, after controlling for demographics and weekly drinking, SDR was related positively with the magnitude of difference between students' own reported behaviors/attitudes and those perceived for the average student. Standard methodological practices and socially desirable responding may be sources of bias in peer norm overestimation research. Copyright © 2016 by the Research Society on Alcoholism.
Chakraborty, Sandipan; Bandyopadhyay, Jaya; Chakraborty, Sourav; Basu, Soumalee
2016-10-04
Alzheimer's disease (AD) is the most frequent form of neurodegenerative disorder in elderly people. The involvement of several pathogenic events and their interconnections makes this disease a complex disorder. Therefore, designing compounds that can inhibit multiple toxic pathways is the most attractive therapeutic strategy in complex disorders like AD. Here, we have designed a multi-tier screening protocol combining ensemble docking to mine BACE1 inhibitors, as well as 2-D QSAR models for anti-amyloidogenic and antioxidant activities. An in-house developed phytochemical library of 200 phytochemicals was screened through this multi-target procedure, which mined hesperidin, a flavanone glycoside commonly found in citrus food items, as a multi-potent phytochemical in AD therapeutics. Steady-state and time-resolved fluorescence spectroscopy reveal that binding of hesperidin to the active site of BACE1 induces a conformational transition of the protein from the open to the closed form. Hesperidin docks close to the catalytic aspartate residues and orients itself in a way that blocks the cavity opening, thereby precluding substrate binding. Hesperidin is a high-affinity BACE1 inhibitor, and only 500 nM of the compound shows complete inhibition of the enzyme activity. Furthermore, ANS and Thioflavin-T binding assays show that hesperidin completely inhibits amyloid fibril formation, which is further supported by atomic force microscopy. Hesperidin exhibits moderate ABTS(+) radical scavenging activity but strong hydroxyl radical scavenging ability, as evident from a DNA nicking assay. The present study demonstrates the applicability of a novel multi-target screening procedure to mine multi-potent agents of natural origin for AD therapeutics. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Anton, S. R.; Taylor, S. G.; Raby, E. Y.; Farinholt, K. M.
2013-03-01
With a global interest in the development of clean, renewable energy, wind energy has seen steady growth over the past several years. Advances in wind turbine technology bring larger, more complex turbines and wind farms. An important issue in the development of these complex systems is the ability to monitor the state of each turbine in an effort to improve the efficiency and power generation. Wireless sensor nodes can be used to interrogate the current state and health of wind turbine structures; however, a drawback of most current wireless sensor technology is their reliance on batteries for power. Energy harvesting solutions present the ability to create autonomous power sources for small, low-power electronics through the scavenging of ambient energy; however, most conventional energy harvesting systems employ a single mode of energy conversion, and thus are highly susceptible to variations in the ambient energy. In this work, a multi-source energy harvesting system is developed to power embedded electronics for wind turbine applications in which energy can be scavenged simultaneously from several ambient energy sources. Field testing is performed on a full-size, residential scale wind turbine where both vibration and solar energy harvesting systems are utilized to power wireless sensing systems. Two wireless sensors are investigated, including the wireless impedance device (WID) sensor node, developed at Los Alamos National Laboratory (LANL), and an ultra-low power RF system-on-chip board that is the basis for an embedded wireless accelerometer node currently under development at LANL. Results indicate the ability of the multi-source harvester to successfully power both sensors.
van der Meulen, Mirja W; Boerebach, Benjamin C M; Smirnova, Alina; Heeneman, Sylvia; Oude Egbrink, Mirjam G A; van der Vleuten, Cees P M; Arah, Onyebuchi A; Lombarts, Kiki M J M H
2017-01-01
Multisource feedback (MSF) instruments are used to provide reliable and valid data on physicians' performance from multiple perspectives, and must do so feasibly. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. Using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses, the psychometric qualities and feasibility of the INCEPT were investigated. For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self)-management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations from three peers, three residents and three to four coworkers were sufficient. The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism
NASA Astrophysics Data System (ADS)
Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.
2006-05-01
In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.
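Two pieces of the framework can be illustrated with a toy sketch: weighting a kinematic likelihood by class (intelligence) information, and a simple heading-and-distance score for the probability that a target attacks an AOI. The functional forms and numbers below are assumptions for illustration, not the paper's VS-IMM formulation.

    import math

    def combined_likelihood(kinematic_lik, class_probs, class_behaviour_lik):
        """Weight a kinematic likelihood by class information:
        L = L_kin * sum_c P(class = c) * L(behaviour | class = c)."""
        return kinematic_lik * sum(class_probs[c] * class_behaviour_lik[c]
                                   for c in class_probs)

    def attack_probability(pos, vel, aoi, scale=500.0):
        """Heading-and-distance heuristic: high when moving toward a nearby AOI."""
        dx, dy = aoi[0] - pos[0], aoi[1] - pos[1]
        dist, speed = math.hypot(dx, dy), math.hypot(*vel)
        if dist == 0.0:
            return 1.0
        if speed == 0.0:
            return 0.0
        alignment = max(0.0, (vel[0] * dx + vel[1] * dy) / (speed * dist))
        return alignment * math.exp(-dist / scale)

    p_attack = attack_probability(pos=(0, 0), vel=(10, 2), aoi=(400, 60))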
A systematic study of chemogenomics of carbohydrates.
Gu, Jiangyong; Luo, Fang; Chen, Lirong; Yuan, Gu; Xu, Xiaojie
2014-03-04
Chemogenomics focuses on the interactions between biologically active molecules and protein targets for drug discovery. Carbohydrates are the most abundant compounds in natural products. Compared with other drugs, carbohydrate drugs show weaker side effects. Searching for multi-target carbohydrate drugs can be regarded as a way to improve therapeutic efficacy and safety. In this work, we collected 60 344 carbohydrates from the Universal Natural Products Database (UNPD) and explored the chemical space of carbohydrates by principal component analysis. We found that there is a large number of potential lead compounds among carbohydrates. We then explored the potential of carbohydrates in drug discovery by using a network-based multi-target computational approach. All carbohydrates were docked to 2389 target proteins. The carbohydrates with the greatest potential for drug discovery, and their indications, were predicted based on a docking-score-weighted prediction model. We also explored the interactions between carbohydrates and target proteins to find pathological networks, potential drug candidates and new indications.
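A minimal sketch of a docking-score-weighted ranking of indications, assuming scores have already been computed and each target has been mapped to indications; the normalization, example scores, and target-indication map are illustrative assumptions, not the paper's model.

    from collections import defaultdict

    def rank_indications(docking_scores, target_indications):
        """docking_scores: {target: normalized score, higher = better fit};
        target_indications: {target: [indication, ...]}."""
        totals = defaultdict(float)
        for target, score in docking_scores.items():
            for indication in target_indications.get(target, []):
                totals[indication] += score
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    scores = {"COX2": 0.8, "AChE": 0.6, "DPP4": 0.9}                  # toy values
    mapping = {"COX2": ["inflammation"], "AChE": ["Alzheimer's disease"],
               "DPP4": ["type 2 diabetes"]}
    print(rank_indications(scores, mapping))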
NASA Astrophysics Data System (ADS)
Duan, Xiaopin; Xiao, Jisheng; Yin, Qi; Zhang, Zhiwen; Yu, Haijun; Mao, Shirui; Li, Yaping
2014-03-01
Metastasis, the main cause of cancer related deaths, remains the greatest challenge in cancer treatment. Disulfiram (DSF), which has multi-targeted anti-tumor activity, was encapsulated into redox-sensitive shell crosslinked micelles to achieve intracellular targeted delivery and finally inhibit tumor growth and metastasis. The crosslinked micelles demonstrated good stability in circulation and specifically released DSF under a reductive environment that mimicked the intracellular conditions of tumor cells. As a result, the DSF-loaded redox-sensitive shell crosslinked micelles (DCMs) dramatically inhibited cell proliferation, induced cell apoptosis and suppressed cell invasion, as well as impairing tube formation of HMEC-1 cells. In addition, the DCMs could accumulate in tumor tissue and stay there for a long time, thereby causing significant inhibition of 4T1 tumor growth and marked prevention in lung metastasis of 4T1 tumors. These results suggested that DCMs could be a promising delivery system in inhibiting the growth and metastasis of breast cancer.
Selenoureido-iminosugars: A new family of multitarget drugs.
Olsen, Jacob Ingemar; Plata, Gabriela B; Padrón, José M; López, Óscar; Bols, Mikael; Fernández-Bolaños, José G
2016-11-10
Herein we report the synthesis of N-alkylated deoxynojirimycin derivatives decorated with a selenoureido motif at the hydrocarbon tether as an example of unprecedented multitarget agents. Title compounds were designed as dual drugs for tackling simultaneously the Gaucher disease (by selective inhibition of β-glucosidase, Ki = 1.6-5.5 μM, with improved potency and selectivity compared to deoxynojirimycin) and its neurological complications (by inhibiting AChE, Ki up to 5.8 μM). Moreover, an excellent mimicry of the selenoenzyme glutathione peroxidase was also found for the catalytic scavenging of H2O2 (Kcat/Kuncat up to 640) using PhSH as a cofactor, with improved activity compared to known positive controls, like (PhSe)2 and ebselen; therefore, such compounds are also excellent scavengers of peroxides, an example of reactive oxygen species present at high concentrations in patients of Gaucher disease and neurological disorders. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Multifunctional Cinnamic Acid Derivatives.
Peperidou, Aikaterini; Pontiki, Eleni; Hadjipavlou-Litina, Dimitra; Voulgari, Efstathia; Avgoustakis, Konstantinos
2017-07-25
Our research to discover potential new multitarget agents led to the synthesis of 10 novel derivatives of cinnamic acids and propranolol, atenolol, 1-adamantanol, naphth-1-ol, and (benzylamino)ethan-1-ol. The synthesized molecules were evaluated as trypsin, lipoxygenase and lipid peroxidation inhibitors and for their cytotoxicity. Compound 2b, derived from phenoxyphenyl cinnamic acid and propranolol, showed the highest lipoxygenase (LOX) inhibition (IC50 = 6 μΜ) together with strong antiproteolytic activity (IC50 = 0.425 μΜ). The conjugate 1a of simple cinnamic acid with propranolol showed even higher antiproteolytic activity (IC50 = 0.315 μΜ) and good LOX inhibitory activity (IC50 = 66 μΜ). Compounds 3a and 3b, derived from methoxylated caffeic acid, present a promising combination of in vitro inhibitory and antioxidative activities. The S isomer of 2b also presented an interesting multitarget biological profile in vitro. Molecular docking studies indicate that the theoretical results for LOX-inhibitor binding are consistent with the preliminary in vitro findings.
Simoni, Elena; Daniele, Simona; Bottegoni, Giovanni; Pizzirani, Daniela; Trincavelli, Maria L; Goldoni, Luca; Tarozzo, Glauco; Reggiani, Angelo; Martini, Claudia; Piomelli, Daniele; Melchiorre, Carlo; Rosini, Michela; Cavalli, Andrea
2012-11-26
Herein we report on a novel series of multitargeted compounds obtained by linking together galantamine and memantine. The compounds were designed by taking advantage of the crystal structures of acetylcholinesterase (AChE) in complex with galantamine derivatives. Sixteen novel derivatives were synthesized, using spacers of different lengths and chemical composition. The molecules were then tested as inhibitors of AChE and as binders of the N-methyl-d-aspartate (NMDA) receptor (NMDAR). Some of the new compounds were nanomolar inhibitors of AChE and showed micromolar affinities for NMDAR. All compounds were also tested for selectivity toward NMDAR containing the 2B subunit (NR2B). Some of the new derivatives showed a micromolar affinity for NR2B. Finally, selected compounds were tested using a cell-based assay to measure their neuroprotective activity. Three of them showed a remarkable neuroprotective profile, inhibiting the NMDA-induced neurotoxicity at subnanomolar concentrations (e.g., 5, named memagal, IC50 = 0.28 nM).
Effect of Hurdle Technology in Food Preservation: A Review.
Singh, Shiv; Shalini, Rachana
2016-01-01
Hurdle technology is used in industrialized as well as developing countries for the gentle but effective preservation of foods. It was developed several years ago as a new concept for the production of safe, stable, nutritious, tasty, and economical foods. Previously, hurdle technology, i.e., a combination of preservation methods, was used empirically without much knowledge of the governing principles. The intelligent application of hurdle technology has become more prevalent now that the principles of the major preservative factors for foods (e.g., temperature, pH, water activity (aw), redox potential (Eh), competitive flora) and their interactions are better known. More recently, the influence of food preservation methods on the physiology and behavior of microorganisms in foods, i.e., their homeostasis, metabolic exhaustion, and stress reactions, has been taken into account, and the novel concept of multi-target food preservation has emerged. The present contribution reviews the potential hurdles for foods, the hurdle effect, and hurdle technology, with a view to the future goal of multi-target preservation of foods.
NASA Astrophysics Data System (ADS)
Ebtehaj, Isa; Bonakdari, Hossein; Khoshbin, Fatemeh
2016-10-01
To determine the minimum velocity required to prevent sedimentation, six different models were proposed to estimate the densimetric Froude number (Fr). The dimensionless parameters of the models were used together with a combination of the group method of data handling (GMDH) and a multi-target genetic algorithm. To this end, an evolutionary design of the generalized GMDH was developed using a genetic algorithm with a specific coding scheme so as not to restrict connectivity configurations to abutting layers only. In addition, a new preserving mechanism of the multi-target genetic algorithm was utilized for the Pareto optimization of GMDH. The results indicated that the most accurate model was the one that used the volumetric concentration of sediment (CV), relative hydraulic radius (d/R), dimensionless particle number (Dgr) and overall sediment friction factor (λs) to estimate Fr. Furthermore, a comparison between the proposed method and traditional equations indicated that GMDH is more accurate than the existing equations.
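For readers unfamiliar with the target quantity, the sketch below shows one common form of the densimetric Froude number used in self-cleansing (minimum-velocity) design; the parameter set used by the six models above is richer, and the variable names, default specific gravity and example values here are illustrative assumptions only.

```python
# Hedged sketch: one common definition of the densimetric Froude number used
# in minimum-velocity (self-cleansing) design; variable names and defaults
# are illustrative, not taken from the paper.
import math

def densimetric_froude(velocity, particle_diameter, specific_gravity=2.65, g=9.81):
    """Fr = V / sqrt(g * (s - 1) * d), with V in m/s and d in m."""
    return velocity / math.sqrt(g * (specific_gravity - 1.0) * particle_diameter)

# toy usage: 0.6 m/s flow over a sand-sized particle of 0.5 mm
print(densimetric_froude(velocity=0.6, particle_diameter=0.5e-3))
```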
NASA Astrophysics Data System (ADS)
Taubenböck, H.; Wurm, M.; Netzband, M.; Zwenzner, H.; Roth, A.; Rahman, A.; Dech, S.
2011-02-01
Estimating flood risks and managing disasters combines knowledge in climatology, meteorology, hydrology, hydraulic engineering, statistics, planning and geography, and is thus a complex, multi-faceted problem. This study focuses on the capabilities of multi-source remote sensing data to support decision-making before, during and after a flood event. With a focus on urbanized areas, sample methods and applications show multi-scale products from both the hazard and the vulnerability perspective of the risk framework. From the hazard side, we present capabilities with which to assess flood-prone areas before an expected disaster. Then we map the spatial impact during or after a flood and, finally, we analyze damage grades after a flood disaster. From the vulnerability side, we monitor urbanization over time at the urban-footprint level, classify urban structures at the individual-building level, assess building stability and estimate the number of people probably affected. The results provide a rich database for sustainable development and for developing mitigation strategies, ad hoc coordination of relief measures, and the organization of rehabilitation.
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.
Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya
2014-01-01
Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships because of computational difficulties. The statistical model-based approach uses highly abstract models to describe biological systems simply and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not biologically permitted. Thus, when dealing with several tens of genes whose relationships are partially known, a method is needed that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes while maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through simulation studies comparing it with several previous methods. As an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information. PMID:25162401
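The authors' method uses a state space representation, but the flavor of the sparsity-inducing step can be illustrated with a plain L1-regularized VAR(1) fit; this is a minimal sketch with simulated data and an assumed regularization strength, not the published implementation.

```python
# Minimal sketch of L1-regularized VAR(1) inference of a gene regulatory
# network; illustrative only, not the authors' state-space implementation.
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_var(expr, alpha=0.1):
    """expr: (T, G) array of expression levels over T time points for G genes.
    Returns a (G, G) coefficient matrix A where A[i, j] is the estimated
    influence of gene j at time t-1 on gene i at time t."""
    X, Y = expr[:-1], expr[1:]          # predictors: t-1, responses: t
    G = expr.shape[1]
    A = np.zeros((G, G))
    for i in range(G):
        model = Lasso(alpha=alpha, fit_intercept=True, max_iter=10000)
        model.fit(X, Y[:, i])
        A[i] = model.coef_              # sparse row: regulators of gene i
    return A

# toy usage with simulated data (50 time points, 10 genes)
rng = np.random.default_rng(0)
A_est = fit_sparse_var(rng.normal(size=(50, 10)), alpha=0.05)
print((np.abs(A_est) > 1e-6).sum(), "nonzero edges")
```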
Cui, Tianxiang; Wang, Yujie; Sun, Rui; Qiao, Chen; Fan, Wenjie; Jiang, Guoqing; Hao, Lvyuan; Zhang, Lei
2016-01-01
Estimating gross primary production (GPP) and net primary production (NPP) is critically important for studying carbon cycles. Using models driven by multi-source and multi-scale data is a promising approach to estimating GPP and NPP at regional and global scales. With a focus on data that are openly accessible, this paper presents a GPP and NPP model driven by remotely sensed data and meteorological data with spatial resolutions varying from 30 m to 0.25 degree and temporal resolutions ranging from 3 hours to 1 month, integrating remote sensing techniques and eco-physiological process theories. Our model is also designed as part of the Multi-source data Synergized Quantitative (MuSyQ) Remote Sensing Production System. In the presented MuSyQ-NPP algorithm, daily GPP for a 10-day period was calculated as the product of incident photosynthetically active radiation (PAR) and its fraction absorbed by vegetation (FPAR) using a light use efficiency (LUE) model. Autotrophic respiration (Ra) was determined using eco-physiological process theories, and daily NPP was obtained as the balance between GPP and Ra. To test its feasibility at regional scales, our model was run over an arid and semi-arid region of the Heihe River Basin, China, to generate daily GPP and NPP during the growing season of 2012. The results indicated that both GPP and NPP exhibit clear spatial and temporal patterns over the Heihe River Basin during the growing season, driven by temperature, water and solar influx conditions. After validation against ground-based measurements, the MODIS GPP product (MOD17A2H) and results reported in recent literature, we found that the MuSyQ-NPP algorithm yields an RMSE of 2.973 gC m(-2) d(-1) and an R of 0.842 against ground-based GPP, whereas MODIS GPP achieves an RMSE of 8.010 gC m(-2) d(-1) and an R of 0.682; the estimated NPP values were also well within the range of previous literature, which supports the reliability of our modelling results. This research suggests that the use of multi-source data at various scales can help establish an appropriate model for calculating GPP and NPP at regional scales with relatively high spatial and temporal resolution.
NASA Astrophysics Data System (ADS)
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A.
2016-10-01
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called “Collective Influence (CI)” has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes’ significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct “virtual” information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce larger-scale information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes’ importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
Nifadkar, Sushil S; Bauer, Talya N
2016-01-01
Previous studies of newcomer socialization have underlined the importance of newcomers' information seeking for their adjustment to the organization, and the conflict literature has consistently reported negative effects of relationship conflict with coworkers. However, to date, no study has examined the consequences of relationship conflict for newcomers' information seeking. In this study, we examined newcomers' reactions when they have relationship conflict with their coworkers and hence cannot obtain necessary information from them. Drawing upon belongingness theory, we propose a model that moves from breach of belongingness to its proximal and distal consequences, to newcomer information seeking, and then to task-related outcomes. In particular, we propose that two paths exist, one coworker-centric and the other supervisor-centric, which may have simultaneous yet contrasting influences on newcomer adjustment. To test our model, we employ a 3-wave data collection research design with egocentric and Likert-type multisource surveys among a sample of new software engineers and their supervisors working in India. This study contributes to the field by linking the literatures on relationship conflict and newcomer information seeking and by suggesting that, despite conflict with coworkers, newcomers may succeed in organizations by building relationships with and obtaining information from supervisors.
New geomorphic data on the active Taiwan orogen: A multisource approach
NASA Technical Reports Server (NTRS)
Deffontaines, B.; Lee, J.-C.; Angelier, J.; Carvalho, J.; Rudant, J.-P.
1994-01-01
A multisource and multiscale approach to Taiwan morphotectonics combines different complementary geomorphic analyses based on a new digital elevation model (DEM), side-looking airborne radar (SLAR) and satellite (SPOT) imagery, aerial photographs, and control from independent field data. This analysis enables us not only to present an integrated geomorphic description of the Taiwan orogen but also to highlight some new geodynamic aspects. Well-known major geological structures such as the Longitudinal Valley, Lishan, Pingtung, and the Foothills fault zones are of course clearly recognized, but numerous, previously unrecognized structures appear distributed within different regions of Taiwan. For instance, transfer fault zones within the Western Foothills and the Central Range are identified based on analyses of lineaments and general morphology. In many cases, the existence of geomorphic features identified in general images is supported by the results of geological field analyses carried out independently. In turn, the field analyses of structures and mechanisms at some sites provide a key for interpreting similar geomorphic features in other areas. Examples are the conjugate pattern of strike-slip faults within the Central Range and the oblique fold-and-thrust pattern of the Coastal Range. Furthermore, neotectonic and morphological analyses (drainage and erosional surfaces) have been combined in order to obtain a more comprehensive description and interpretation of neotectonic features in Taiwan, such as the Longitudinal Valley Fault. Next, at a more general scale, numerical processing of digital elevation models, resulting in average topography, summit level or base level maps, allows identification of major features related to the dynamics of uplift and erosion and estimates of the erosion balance. Finally, a preliminary morphotectonic sketch map of Taiwan, combining information from all the sources listed above, is presented.
DOT National Transportation Integrated Search
2012-03-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Small Scale Multisource Site – Hydrogeology Investigation
A site impacted by brackish water was evaluated using traditional hydrogeologic and geochemical site characterization techniques. No single, specific source of the brine impacted ground water was identified. However, the extent of the brine impacted ground water was found to be...
Gagnier, Kristin Michod; Dickinson, Christopher A.; Intraub, Helene
2015-01-01
Observers frequently remember seeing more of a scene than was shown (boundary extension). Does this reflect a lack of eye fixations to the boundary region? Single-object photographs were presented for 14–15 s each. Main objects were either whole or slightly cropped by one boundary, creating a salient marker of boundary placement. All participants expected a memory test, but only half were informed that boundary memory would be tested. Participants in both conditions made multiple fixations to the boundary region and the cropped region during study. Demonstrating the importance of these regions, test-informed participants fixated them sooner, longer, and more frequently. Boundary ratings (Experiment 1) and border adjustment tasks (Experiments 2–4) revealed boundary extension in both conditions. The error was reduced, but not eliminated, in the test-informed condition. Surprisingly, test knowledge and multiple fixations to the salient cropped region, during study and at test, were insufficient to overcome boundary extension on the cropped side. Results are discussed within a traditional visual-centric framework versus a multisource model of scene perception. PMID:23547787
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
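The paper's decision-level fusion is built on dynamic basic probability assignments, which are not reproduced here; as a generic stand-in, the sketch below fuses per-source fatigue probabilities with weights that track each source's recent reliability. All names and values are illustrative assumptions.

```python
# Illustrative decision-level fusion with dynamically weighted sources; a
# generic stand-in, not the paper's BPA-based recognition model.
import numpy as np

def fuse_decisions(source_probs, recent_accuracy):
    """source_probs: per-source probabilities that the driver is fatigued
    at the current time step.
    recent_accuracy: each source's recent reliability (e.g., rolling accuracy).
    Returns the fused fatigue probability using reliability-proportional weights."""
    weights = np.asarray(recent_accuracy, dtype=float)
    weights = weights / weights.sum()          # dynamic, time-varying weights
    return float(np.dot(weights, np.asarray(source_probs, dtype=float)))

# toy usage: eye-closure, steering and lane-position cues disagree slightly
print(fuse_decisions([0.8, 0.6, 0.4], recent_accuracy=[0.9, 0.7, 0.5]))
```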
Composable Analytic Systems for next-generation intelligence analysis
NASA Astrophysics Data System (ADS)
DiBona, Phil; Llinas, James; Barry, Kevin
2015-05-01
Lockheed Martin Advanced Technology Laboratories (LM ATL) is collaborating with Professor James Llinas, Ph.D., of the Center for Multisource Information Fusion at the University at Buffalo (State of NY), researching concepts for a mixed-initiative associate system for intelligence analysts that facilitates reduced analysis and decision times while proactively discovering and presenting relevant information based on the analyst's needs, current tasks and cognitive state. Today's exploitation and analysis systems have largely been designed for a specific sensor, data type, and operational context, leading to difficulty in directly supporting the analyst's evolving tasking and work-product development preferences across complex operational environments. Our interactions with analysts illuminate the need to impact information fusion, exploitation, and analysis capabilities in a variety of ways, including understanding data options, algorithm composition, hypothesis validation, and work-product development. Composable Analytic Systems, an analyst-driven system that increases the flexibility and capability to effectively utilize Multi-INT fusion and analytics tailored to the analyst's mission needs, holds promise to address current and future intelligence analysis needs as US forces engage threats in contested and denied environments.
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth's surface have proven helpful in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have recently been proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox conceived to produce flood maps from remotely sensed and other ancillary information through a data fusion approach. DAFNE is based on Bayesian Networks and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
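DAFNE itself is a Matlab Bayesian-network toolbox; purely as an illustration of the underlying idea of probabilistic evidence fusion, the sketch below combines per-pixel likelihood ratios from two hypothetical sources under a naive conditional-independence assumption. It is not the DAFNE implementation.

```python
# Illustrative naive-Bayes fusion of per-pixel flood evidence from several
# sources (e.g., SAR backscatter, terrain); a generic sketch, not DAFNE.
import numpy as np

def fuse_flood_probability(likelihood_ratios, prior=0.1):
    """likelihood_ratios: list of (H, W) arrays, each giving
    P(observation | flood) / P(observation | no flood) for one source.
    Returns the posterior flood probability per pixel, assuming the
    sources are conditionally independent given the flood state."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds = odds * lr
    return odds / (1.0 + odds)

# toy usage: two 3x3 evidence layers with assumed likelihood ratios
sar_lr = np.array([[5, 5, 0.2], [5, 1, 0.2], [0.5, 0.2, 0.2]], dtype=float)
dem_lr = np.array([[3, 3, 0.5], [3, 1, 0.5], [1, 0.5, 0.5]], dtype=float)
posterior = fuse_flood_probability([sar_lr, dem_lr], prior=0.2)
print(posterior.round(2))
```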
[Real-time detection of quality of Chinese materia medica: strategy of NIR model evaluation].
Wu, Zhi-sheng; Shi, Xin-yuan; Xu, Bing; Dai, Xing-xing; Qiao, Yan-jiang
2015-07-01
The definition of critical quality attributes of Chinese materia medica (CMM) is put forward based on a top-level design concept. With the development of rapid analytical science, rapid assessment of the critical quality attributes of CMM, a secondary discipline branch of CMM, is carried out for the first time. Taking near-infrared (NIR) spectroscopy, a rapid analytical technology widely used in pharmaceutical processes over the past decade, as an example, the chemometric parameters used in NIR model evaluation are systematically reviewed. Considering the complexity of CMM and the need for trace-component analysis, a multi-source information fusion strategy for NIR models was developed for the assessment of the critical quality attributes of CMM. The strategy provides a guideline for reliable NIR analysis of the critical quality attributes of CMM.
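As an illustration of the kind of chemometric parameters discussed, the sketch below fits a PLS calibration model to simulated spectra and reports RMSEP and R² on a held-out prediction set; the data, component count and split are placeholders, not the evaluation strategy proposed in the paper.

```python
# Hedged sketch of two common chemometric evaluation parameters for an NIR
# calibration model (RMSEP and R^2 on a prediction set), using PLS as the
# example regressor with simulated spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                     # 80 spectra x 200 wavelengths
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 80)  # synthetic reference values
X_cal, X_val, y_cal, y_val = X[:60], X[60:], y[:60], y[60:]

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = mean_squared_error(y_val, y_pred) ** 0.5
print(f"RMSEP = {rmsep:.3f}, R^2 = {r2_score(y_val, y_pred):.3f}")
```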
Parke, Michael R; Seo, Myeong-Gu; Sherf, Elad N
2015-05-01
Although past research has identified the effects of emotional intelligence on numerous employee outcomes, the relationship between emotional intelligence and creativity has not been well established. We draw upon affective information processing theory to explain how two facets of emotional intelligence-emotion regulation and emotion facilitation-shape employee creativity. Specifically, we propose that emotion regulation ability enables employees to maintain higher positive affect (PA) when faced with unique knowledge processing requirements, while emotion facilitation ability enables employees to use their PA to enhance their creativity. We find support for our hypotheses using a multimethod (ability test, experience sampling, survey) and multisource (archival, self-reported, supervisor-reported) research design of early career managers across a wide range of jobs.
Multi-Target Tracking for Swarm vs. Swarm UAV Systems
2012-09-01
DOT National Transportation Integrated Search
2018-01-01
Connected vehicle mobility applications are commonly referred to as dynamic mobility applications (DMAs). DMAs seek to fully leverage frequently collected and rapidly disseminated multi-source data gathered from connected travelers, vehicles, and inf...
Advancing Future Network Science through Content Understanding
2014-05-01
... (BitTorrent, PostgreSQL, MySQL, and GRSecurity) and emerging technologies (HadoopDFS, Tokutera, Sector/Sphere, HBase, and other BigTable-like ...). • Multi-Source Network Pulse Analyzer and Correlator: provides course-of-action planning by enhancing the understanding of the complex dynamics ...
DOT National Transportation Integrated Search
2012-08-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
DOT National Transportation Integrated Search
2011-11-01
In support of USDOT's Intelligent Transportation Systems (ITS) Mobility Program, the Dynamic Mobility Applications (DMA) program seeks to create applications that fully leverage frequently collected and rapidly disseminated multi-source data gat...
Feedback data sources that inform physician self-assessment.
Lockyer, Jocelyn; Armson, Heather; Chesluk, Benjamin; Dornan, Timothy; Holmboe, Eric; Loney, Elaine; Mann, Karen; Sargeant, Joan
2011-01-01
Self-assessment is a process of interpreting data about one's performance and comparing them to explicit or implicit standards. The aim of this study was to examine the external data sources physicians used to monitor themselves. Focus groups were conducted with physicians who participated in three practice improvement activities: a multisource feedback program; a program providing patient and chart audit data; and practice-based learning groups. We used grounded theory strategies to understand the external sources that stimulated self-assessment and how they worked. Data from seven focus groups (49 physicians) were analyzed. Physicians used information from structured programs, other educational activities, professional colleagues, and patients. Data were of varying quality, often from non-formal sources with implicit (rather than explicit) standards. Mandatory programs elicited variable responses, whereas data and activities the physicians selected themselves were more likely to be accepted. Physicians used the information to create a reference point against which they could weigh their performance, drawing on it variably depending on their personal interpretation of its accuracy, applicability, and utility. Physicians use and interpret data and standards of varying quality to inform self-assessment, and they may benefit from regular and routine feedback and guidance on how to seek out data for self-assessment.
Evidence Combination From an Evolutionary Game Theory Perspective
Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu
2017-01-01
Dempster-Shafer evidence theory is a primary methodology for multi-source information fusion because it is good at dealing with uncertain information. The theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained from that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results from perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper takes a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multi-evidence system. Within the proposed ECR, we develop a Jaccard matrix game (JMG) to formalize the interaction between propositions in evidences, and utilize replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution’s stability and convergence, have been mathematically proved as well. PMID:26285231
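For context, the classical combination rule that the ECR is compared against can be written in a few lines; the sketch below implements standard Dempster combination for two basic probability assignments over a toy frame of discernment and is not the ECR itself.

```python
# Minimal sketch of Dempster's rule of combination for two basic probability
# assignments (BPAs); illustrative only, not the proposed ECR.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses (BPAs).
    Returns the combined BPA; raises if the evidences are fully conflicting."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# toy usage over the frame {A, B}
A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.8, frozenset({"A", "B"}): 0.2}
m2 = {B: 0.6, frozenset({"A", "B"}): 0.4}
print(dempster_combine(m1, m2))
```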
Prince, Mark; Lester, Lynn; Chiniwala, Rupal; Berger, Barry
2017-01-01
AIM: To determine the uptake of noninvasive multitarget stool DNA (mt-sDNA) testing in a cohort of colorectal cancer (CRC) screening non-compliant, average-risk Medicare patients. METHODS: This cross-sectional, primary care office-based study examined mt-sDNA uptake in routine clinical practice among 393 colorectal cancer screening non-compliant Medicare patients ages 50-85, ordered by 77 physicians in a multispecialty group practice (USMD Physician Services, Dallas, TX) from October 2014 to September 2015. Investigators performed a Health Insurance Portability and Accountability Act-compliant retrospective review of electronic health records to identify mt-sDNA use in patients who were either > 10 years since last colonoscopy and/or > 1 year since last fecal occult blood test. Test-positive patients were advised to get diagnostic colonoscopy, and thereafter patients were characterized by the most clinically significant lesion documented on histopathology of biopsies or excisional tissue. Descriptive statistics were employed. Key outcome measures included mt-sDNA compliance and diagnostic colonoscopy compliance in positive cases. RESULTS: Over 12 mo, 77 providers ordered 393 mt-sDNA studies with 347 completed (88.3% compliance). Patient mean age was 69.8 (range 50-85) and 64% of patients were female. Mt-sDNA was negative in 85.3% (296/347) and positive in 14.7% (51/347). Follow-up colonoscopy was performed in 49 positive patients (96.1% colonoscopy compliance), with two patients lost to follow-up. Index findings included: colon cancer (4/49, 8.2%), advanced adenomas (21/49, 42.9%), non-advanced adenomas (15/49, 30.6%), and negative results (9/49, 18.4%). The positive predictive value for advanced colorectal lesions was 51.0% and for any colorectal neoplasia was 81.6%. The mean age of patients with colorectal cancer was 70.3, and all CRCs were localized Stage I (2) and Stage II (2); three were located in the proximal colon and one in the distal colon. CONCLUSION: Mt-sDNA provided medical benefit to this screening non-compliant Medicare population. High compliance with mt-sDNA and subsequent follow-up diagnostic colonoscopy identified patients with clinically critical advanced colorectal neoplasia. PMID:28210082
NASA Astrophysics Data System (ADS)
Camporese, M.; Botto, A.
2017-12-01
Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows for direct integration of multisource observation data into model predictions and for uncertainty reduction. For this reason, data assimilation has recently been the focus of much attention also for integrated surface-subsurface hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously in an attempt to tackle environmental problems holistically. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). Although the EnKF has been specifically developed to deal with nonlinear models, integrated hydrological models based on the Richards equation still represent a challenge, due to strong nonlinearities that may significantly affect the filter performance. Thus, more studies are needed to investigate the capability of the EnKF to correct the system state and identify parameters in cases where the unsaturated zone dynamics are dominant. Here, the model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope, equipped with tensiometers, water content reflectometer probes, and tipping-bucket flow gages to monitor the hillslope response to a series of artificial rainfall events. We assimilate pressure head, soil moisture, and subsurface outflow with the EnKF in a number of assimilation scenarios and discuss the challenges, issues, and tradeoffs arising from the assimilation of multisource data in a real-world test case, with particular focus on the capability of data assimilation to update the subsurface parameters.
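To make the assimilation step concrete, here is a minimal stochastic EnKF analysis update with a linear observation operator; it is a generic numpy sketch under simplifying assumptions (perturbed observations, no localization or inflation), not the CATHY-specific scheme used in the study.

```python
# Minimal stochastic-EnKF analysis step (state update given an observation);
# a generic sketch, not the CATHY assimilation setup described above.
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng=np.random.default_rng(0)):
    """ensemble: (n_state, n_members) forecast states.
    H: (n_obs, n_state) linear observation operator.
    obs: (n_obs,) measurement vector; obs_err_std: observation error std."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    perturbed_obs = obs[:, None] + rng.normal(0.0, obs_err_std, (n_obs, n_members))
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    P_hh = HXp @ HXp.T / (n_members - 1) + (obs_err_std ** 2) * np.eye(n_obs)
    P_xh = X @ HXp.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)            # Kalman gain
    return ensemble + K @ (perturbed_obs - HX)

# toy usage: 3 state variables, 20 members, one observation of state[0]
ens = np.random.default_rng(1).normal(1.0, 0.5, (3, 20))
updated = enkf_update(ens, H=np.array([[1.0, 0.0, 0.0]]),
                      obs=np.array([1.2]), obs_err_std=0.1)
print(updated.mean(axis=1))
```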
NASA Astrophysics Data System (ADS)
Zhao, Junsan; Chen, Guoping; Yuan, Lei
2017-04-01
New technologies such as 3D laser scanning, InSAR, GNSS, unmanned aerial vehicles and the Internet of Things provide many more data resources for surveying and monitoring, as well as for the development of Early Warning Systems (EWS). This paper presents the design and implementation of a geological disaster monitoring and early warning system (GDMEWS), covering landslide and debris flow hazards, based on multi-source data acquired with the technologies mentioned above. The complex and changeable characteristics of the GDMEWS are described. The architecture of the system, the composition of the multi-source database, the development mode and service logic, and the methods and key technologies of system development are also analyzed. To elaborate the implementation of the GDMEWS, Deqin Tibetan County, with its unique terrain and diverse types of typical landslides and debris flows, is selected as the case study area. First, the system's functional requirements and its monitoring and forecasting models are discussed. Second, the logical relationships of the whole disaster process, including the pre-disaster phase, disaster rescue and post-disaster reconstruction, are studied, and support tools for disaster prevention, disaster reduction and geological disaster management are developed. Third, the methods for integrating multi-source monitoring data and for generating and simulating geological hazard mechanism models are described. Finally, the construction of the GDMEWS is presented; the system will be applied to real-time, dynamic management, monitoring and forecasting of the whole disaster process in Deqin Tibetan County. Keywords: multi-source spatial data; geological disaster; monitoring and warning system; Deqin Tibetan County
Using the 360° multisource feedback model to evaluate teaching and professionalism.
Berk, Ronald A
2009-12-01
Student ratings have dominated as the primary and, frequently, only measure of teaching performance at colleges and universities for the past 50 years. Recently, there has been a trend toward augmenting those ratings with other data sources to broaden and deepen the evidence base. The 360° multisource feedback (MSF) model, used in management and industry for half a century and in clinical medicine for the last decade, seemed like the best fit to evaluate teaching performance and professionalism. To adapt the 360° MSF model to the assessment of teaching performance and professionalism of medical school faculty, the salient characteristics of the MSF models in industry and medicine were extracted from the literature. These characteristics, along with 14 sources of evidence from eight possible raters (students, self, peers, outside experts, mentors, alumni, employers, and administrators) based on the research in higher education, were adapted to formative and summative decisions. Three 360° MSF models were generated for three different decisions: (1) formative decisions and feedback about teaching improvement; (2) summative decisions and feedback for merit pay and contract renewal; and (3) formative decisions and feedback about professional behaviors in the academic setting. The characteristics of each model were listed. Finally, a top-10 list of the most persistent and, perhaps, intractable psychometric issues in executing these models was suggested to guide future research. The 360° MSF model appears to be a useful framework for implementing a multisource evaluation of faculty teaching performance and professionalism in medical schools. This model can provide more accurate, reliable, fair, and equitable decisions than one based on just a single source.
Ostro, Bart; Tobias, Aurelio; Querol, Xavier; Alastuey, Andrés; Amato, Fulvio; Pey, Jorge; Pérez, Noemí; Sunyer, Jordi
2011-12-01
Dozens of studies link acute exposure to particulate matter (PM) air pollution with premature mortality and morbidity, but questions remain about which species and sources in the vast PM mixture are responsible for the observed health effects. Although a few studies exist on the effects of species and sources in U.S. cities, European cities-which have a higher proportion of diesel engines and denser urban populations-have not been well characterized. Information on the effects of specific sources could aid in targeting pollution control and in articulating the biological mechanisms of PM. Our study examined the effects of various PM sources on daily mortality for 2003 through 2007 in Barcelona, a densely populated city in the northeast corner of Spain. Source apportionment for PM ≤ 2.5 μm and ≤ 10 µm in aerodynamic diameter (PM2.5 and PM10) using positive matrix factorization identified eight different factors. Case-crossover regression analysis was used to estimate the effects of each factor. Several sources of PM2.5, including vehicle exhaust, fuel oil combustion, secondary nitrate/organics, minerals, secondary sulfate/organics, and road dust, had statistically significant associations (p < 0.05) with all-cause and cardiovascular mortality. Also, in some cases relative risks for a respective interquartile range increase in concentration were higher for specific sources than for total PM2.5 mass. These results along with those from our multisource models suggest that traffic, sulfate from shipping and long-range transport, and construction dust are important contributors to the adverse health effects linked to PM.
NASA Astrophysics Data System (ADS)
Velpuri, N. M.; Senay, G. B.; Rowland, J.; Budde, M. E.; Verdin, J. P.
2015-12-01
Continental Africa has the largest volume of water stored in wetlands, large lakes, reservoirs and rivers, yet it suffers from problems of water availability and access. Furthermore, African countries are amongst the most vulnerable to the impacts of natural hazards such as droughts and floods. With climate change intensifying the hydrologic cycle and altering the distribution and frequency of rainfall, the problem of water availability and access is bound to increase. The U.S. Geological Survey Famine Early Warning Systems Network (FEWS NET), funded by the U.S. Agency for International Development, has initiated a large-scale project to monitor small to medium surface water bodies in Africa. Under this project, multi-source satellite data and hydrologic modeling techniques are integrated to monitor these water bodies. First, small water bodies are mapped using satellite data such as Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Landsat, and high-resolution Google Earth imagery. Stream networks and watersheds for each water body are identified using Shuttle Radar Topography Mission (SRTM) digital elevation data. Finally, a hydrologic modeling approach that uses satellite-derived precipitation estimates and evapotranspiration calculated from global data assimilation system climate parameters is applied to model water levels. This approach has been implemented to monitor nearly 300 small water bodies located in 10 countries in sub-Saharan Africa. Validation of modeled scaled depths against field-installed gauge data in East Africa demonstrated the ability of the model to capture both the spatial patterns and the seasonal variations. Modeled scaled estimates captured up to 60% of the observed gauge variability with an average RMSE of 22%. Current and historical data (since 2001) on relative water level, precipitation, and evapotranspiration for each water body are made available in near real time. The water point monitoring network will be further expanded to cover other pastoral regions of sub-Saharan Africa. This project provides timely information on water availability that supports FEWS NET monitoring activities in Africa. The information on water availability produced in this study would further increase the resilience of local communities to floods and droughts.
Velpuri, N.M.; Senay, G.B.; Asante, K.O.
2011-01-01
Managing limited surface water resources is a great challenge in areas where ground-based data are either limited or unavailable. Direct or indirect measurement of surface water resources through remote sensing offers several advantages for monitoring ungauged basins. A physically based hydrologic technique to monitor lake water levels in ungauged basins using multi-source satellite data, such as satellite-based rainfall estimates, modelled runoff, evapotranspiration, a digital elevation model, and other data, is presented. This approach is applied to model Lake Turkana water levels from 1998 to 2009. Modelling results showed that the model can reasonably capture the patterns and seasonal variations of the lake water level fluctuations. A composite lake level product of TOPEX/Poseidon, Jason-1, and ENVISAT satellite altimetry data is used for model calibration (1998-2000) and model validation (2001-2009). Validation results showed that model-based lake levels are in good agreement with the observed satellite altimetry data; the Pearson's correlation coefficient was 0.81 during the validation period. The model efficiency estimated using the Nash-Sutcliffe coefficient of efficiency (NSCE) is 0.93, 0.55 and 0.66 for the calibration, validation and combined periods, respectively. Further, the model-based estimates showed a root mean square error of 0.62 m and a mean absolute error of 0.46 m, with a positive mean bias error of 0.36 m, for the validation period (2001-2009). These error estimates were found to be less than 15% of the natural variability of the lake, thus giving high confidence in the modelled lake level estimates. The approach presented in this paper can be used to (a) simulate patterns of lake water level variations in data-scarce regions, (b) operationally monitor lake water levels in ungauged basins, (c) derive historical lake level information using satellite rainfall and evapotranspiration data, and (d) augment the information provided by satellite altimetry systems on changes in lake water levels. © Author(s) 2011.
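As a toy illustration of the kind of water-balance bookkeeping behind such a model, the sketch below advances a lake level from rainfall, evapotranspiration and inflow inputs; the linear outflow term, the placeholder lake area and all numbers are assumptions for demonstration only, not the calibrated Lake Turkana model.

```python
# Hedged sketch of a simple lake water-balance update driven by rainfall,
# evapotranspiration and watershed inflow; all terms and values are
# illustrative assumptions, not the authors' calibrated model.
def update_lake_level(level_m, rain_m, et_m, runoff_in_m3, lake_area_m2,
                      outflow_coeff=0.0):
    """Advance the lake water level by one time step (depths in metres over
    the step; runoff_in_m3 is the inflow volume from the watershed)."""
    storage_change_m = rain_m - et_m + runoff_in_m3 / lake_area_m2
    outflow_m = outflow_coeff * max(level_m, 0.0)   # optional linear outflow
    return level_m + storage_change_m - outflow_m

# toy usage: two time steps with a placeholder lake area of 5,000 km^2
level = 3.0
for rain, et, runoff in [(0.02, 0.15, 5.0e8), (0.30, 0.14, 2.5e9)]:
    level = update_lake_level(level, rain, et, runoff, lake_area_m2=5.0e9)
    print(round(level, 3))
```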
ERIC Educational Resources Information Center
Frederiksen, H. Allan
In the belief that "the spread of technological development and the attendant rapidly changing environment creates the necessity for multi-source feedback systems to maximize the alternatives available in dealing with global problems," the author shows how to participate in the process of alternate video. He offers detailed information…
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
Because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command), a variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth-weighted matrix is proposed. The uncertain model error is estimated with the semi-parametric estimator, and outliers are restrained with the data-depth-weighted matrix. With the model error and outliers restrained, the improved VCE can be used to estimate the weight matrix for observation data affected by uncertain model errors or outliers. Simulation experiments were carried out for a combined space- and ground-based TT&C scenario. The results show that the new VCE with model error compensation can determine rational weights for the multi-source heterogeneous data and restrain the outlier data.
Multisource oil spill detection
NASA Astrophysics Data System (ADS)
Salberg, Arnt B.; Larsen, Siri O.; Zortea, Maciel
2013-10-01
In this paper we discuss how multisource data (wind, ocean current, optical, bathymetric, and automatic identification system (AIS) data) may be used to improve oil spill detection in SAR images, with emphasis on the use of automatic oil spill detection algorithms. We focus particularly on AIS, optical, and bathymetric data. For the AIS data we propose an algorithm for integrating AIS ship tracks into automatic oil spill detection in order to improve the confidence estimate of a potential oil spill. We demonstrate the use of ancillary data on a set of SAR images. Regarding the use of optical data, we did not observe a clear correspondence between high chlorophyll values (estimated from products derived from optical data) and observed slicks in the SAR image. Bathymetric data were shown to be a good data source for removing false detections caused by, e.g., sand banks at low tide. For the AIS data we observed that a polluter could be identified for some dark slicks; however, a precise oil drift model is needed in order to identify the polluter with high certainty.
Sadick, Neil S; Sato, Masaki; Palmisano, Diana; Frank, Ido; Cohen, Hila; Harth, Yoram
2011-10-01
Acne scars are one of the most difficult disorders to treat in dermatology. The optimal treatment system will provide minimal-downtime resurfacing of the epidermis and non-ablative deep volumetric heating for collagen remodeling in the dermis. A novel therapy system (EndyMed Ltd., Cesarea, Israel) uses phase-controlled multi-source radiofrequency (RF) to provide one-pulse microfractional resurfacing with simultaneous volumetric skin tightening. The study included 26 subjects (Fitzpatrick skin types 2-5) with moderate to severe wrinkles and 4 subjects with depressed acne scars. Treatment was repeated each month up to a total of three treatment sessions. Patients' photographs were graded according to accepted scales by two uninvolved, blinded evaluators. Significant reduction in the depth of wrinkles and acne scars was noted 4 weeks after therapy, with further improvement at the 3-month follow-up. Our data show the histological impact and clinical benefit of simultaneous RF fractional microablation and volumetric deep dermal heating for the treatment of wrinkles and acne scars.
Multisource feedback, human capital, and the financial performance of organizations.
Kim, Kyoung Yong; Atwater, Leanne; Patel, Pankaj C; Smither, James W
2016-11-01
We investigated the relationship between organizations' use of multisource feedback (MSF) programs and their financial performance. We proposed a moderated mediation framework in which the employees' ability and knowledge sharing mediate the relationship between MSF and organizational performance and the purpose for which MSF is used moderates the relationship of MSF with employees' ability and knowledge sharing. With a sample of 253 organizations representing 8,879 employees from 2005 to 2007 in South Korea, we found that MSF had a positive effect on organizational financial performance via employees' ability and knowledge sharing. We also found that when MSF was used for dual purpose (both administrative and developmental purposes), the relationship between MSF and knowledge sharing was stronger, and this interaction carried through to organizational financial performance. However, the purpose of MSF did not moderate the relationship between MSF and employees' ability. The theoretical relevance and practical implications of the findings are discussed.
Multi-source recruitment strategies for advancing addiction recovery research beyond treated samples
Subbaraman, Meenakshi Sabina; Laudet, Alexandre B.; Ritter, Lois A.; Stunz, Aina; Kaskutas, Lee Ann
2014-01-01
Background: The lack of established sampling frames makes reaching individuals in recovery from substance problems difficult. Although general population studies are most generalizable, the low prevalence of individuals in recovery makes this strategy costly and inefficient. Though more efficient, treatment samples are biased. Aims: To describe multi-source recruitment for capturing participants from heterogeneous pathways to recovery; assess which sources produced the most respondents within subgroups; and compare treatment and non-treatment samples to address generalizability. Results: Family/friends, Craigslist, social media and non-12-step groups produced the most respondents from hard-to-reach groups, such as racial minorities and treatment-naïve individuals. Recovery organizations yielded twice as many African-Americans and more rural dwellers, while social media yielded twice as many young people as other sources. Treatment samples had proportionally fewer females and older individuals compared to non-treated samples. Conclusions: Future research on recovery should utilize previously neglected recruiting strategies to maximize the representativeness of samples. PMID:26166909
Getting the Most out of PubChem for Virtual Screening
Kim, Sunghwan
2016-01-01
Introduction With the emergence of the “big data” era, the biomedical research community has great interest in exploiting publicly available chemical information for drug discovery. PubChem is an example of a public database that provides a large amount of chemical information free of charge. Areas covered This article provides an overview of how PubChem’s data, tools, and services can be used for virtual screening and reviews recent publications that discuss important aspects of exploiting PubChem for drug discovery. Expert opinion PubChem offers comprehensive chemical information useful for drug discovery. It also provides multiple programmatic access routes, which are essential for building automated virtual screening pipelines that exploit PubChem data. In addition, PubChemRDF allows users to download PubChem data and load them into a local computing facility, facilitating data integration between PubChem and other resources. PubChem resources have been used in many studies for developing bioactivity and toxicity prediction models, discovering polypharmacologic (multi-target) ligands, and identifying new macromolecule targets of compounds (for drug repurposing or off-target side effect prediction). These studies demonstrate the usefulness of PubChem as a key resource for computer-aided drug discovery and related areas. PMID:27454129
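As a hedged illustration of the programmatic access routes mentioned above, the sketch below queries PubChem's public PUG REST interface for a few computed properties; the compound name and property list are arbitrary examples, not taken from the article.

```python
# Minimal sketch of a PUG REST property lookup; endpoint layout follows the
# public PubChem PUG REST conventions.
import requests

BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def compound_properties(name, properties=("CanonicalSMILES", "MolecularWeight")):
    """Fetch selected computed properties for a compound by name."""
    url = f"{BASE}/compound/name/{name}/property/{','.join(properties)}/JSON"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["PropertyTable"]["Properties"][0]

if __name__ == "__main__":
    # 'aspirin' is only a demonstration query.
    print(compound_properties("aspirin"))
```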
1981-07-01
ie "n.: r c:.t. ’ur wn Le r":2- ence Indistes :nac pole a n :arc loca- 40 ":Lors !AVe a sinificant effect n convergence raes.- .ahdn te poles and...LATTICE ALGORITHMS ( LADO ) 1. The Normalized AR Lattice (ARN) This algorithm implements the normalized algorithm described in [23]. The AR coefficients are
Method for the electro-addressable functionalization of electrode arrays
Harper, Jason C.; Polsky, Ronen; Dirk, Shawn M.; Wheeler, David R.; Arango, Dulce C.; Brozik, Susan M.
2015-12-15
A method for preparing an electrochemical biosensor uses bias-assisted assembly of unreactive -onium molecules on an electrode array followed by post-assembly electro-addressable conversion of the unreactive group to a chemical or biological recognition group. Electro-addressable functionalization of electrode arrays enables the multi-target electrochemical sensing of biological and chemical analytes.
Generic framework for vessel detection and tracking based on distributed marine radar image data
NASA Astrophysics Data System (ADS)
Siegert, Gregor; Hoth, Julian; Banyś, Paweł; Heymann, Frank
2018-04-01
Situation awareness is understood as a key requirement for safe and secure shipping at sea. The primary sensor for maritime situation assessment is still the radar, with the AIS introduced only as a supplemental service. In this article, we present a framework to assess the current situation picture based on marine radar image processing. Essentially, the framework comprises a centralized IMM-JPDA multi-target tracker in combination with a fully automated scheme for track management, i.e., target acquisition and track depletion. The tracker is conditioned on measurements extracted from radar images. To gain a more robust and complete situation picture, we exploit the aspect-angle diversity of multiple marine radars by fusing them prior to the tracking process. Due to the generic structure of the proposed framework, different techniques for radar image processing can be implemented and compared, namely the BLOB detector and SExtractor. The overall framework performance in terms of multi-target state estimation is compared for both methods, based on a dedicated measurement campaign in the Baltic Sea with multiple static and mobile targets.
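As a rough illustration of the measurement-extraction front end described above, the sketch below runs a generic blob detector (scikit-image's Laplacian-of-Gaussian detector, standing in for the article's BLOB/SExtractor plug-ins) on a synthetic radar-like frame; the image, detector parameters, and thresholds are assumptions for demonstration only.

```python
import numpy as np
from skimage.feature import blob_log

# Synthetic radar-like frame: low background clutter plus two bright targets.
rng = np.random.default_rng(3)
frame = 0.05 * rng.random((256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for y, x in [(60, 80), (180, 200)]:
    frame += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 3.0 ** 2))

# Laplacian-of-Gaussian blob detection; each row is (row, col, sigma).
blobs = blob_log(frame, min_sigma=2, max_sigma=6, threshold=0.1)
measurements = blobs[:, :2]   # centroid positions that a tracker would ingest
print(measurements)
```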
Effect of missing data on multitask prediction methods.
de la Vega de León, Antonio; Chen, Beining; Gillet, Valerie J
2018-05-22
There has been growing interest in multitask prediction in chemoinformatics, helped by the increasing use of deep neural networks in this field. This technique is applied to multitarget data sets, where compounds have been tested against different targets, with the aim of developing models to predict a profile of biological activities for a given compound. However, multitarget data sets tend to be sparse; i.e., not all compound-target combinations have experimental values. There has been little research on the effect of missing data on the performance of multitask methods. We have used two complete data sets to simulate sparseness by removing data from the training set, and compared different models for removing the data. These sparse sets were used to train two different multitask methods: deep neural networks and Macau, a Bayesian probabilistic matrix factorization technique. Results from both methods were remarkably similar and showed that the performance decrease caused by missing data is small at first and accelerates only once large amounts of data have been removed. This work provides a first approximation for assessing how much data is required to produce good performance in multitask prediction exercises.
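The sparseness simulation described above can be illustrated with a toy version: entries of a complete compound-by-target activity matrix are removed at random and a simple per-target model is refit on what remains. The random data and the ridge baseline below are stand-ins, not the paper's deep-network or Macau models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_compounds, n_features, n_targets = 500, 64, 4
X = rng.normal(size=(n_compounds, n_features))                 # compound descriptors
W = rng.normal(size=(n_features, n_targets))
Y = X @ W + 0.1 * rng.normal(size=(n_compounds, n_targets))    # complete activity matrix

train, test = np.arange(0, 400), np.arange(400, 500)

for frac_missing in (0.0, 0.25, 0.5, 0.75, 0.9):
    mask = rng.random((len(train), n_targets)) >= frac_missing  # True = observed
    scores = []
    for t in range(n_targets):
        obs = train[mask[:, t]]
        model = Ridge(alpha=1.0).fit(X[obs], Y[obs, t])
        scores.append(r2_score(Y[test, t], model.predict(X[test])))
    print(f"missing={frac_missing:.2f}  mean R2={np.mean(scores):.3f}")
```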
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks with a circular sensing model are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multi-sensor single-target, and single-sensor multi-target problems. For the multi-sensor multi-target problem, which is NP-complete, candidate orientations are selected so that each sensor is rotated to cover every target falling in its FoV disk, and a genetic algorithm is used to obtain an approximate minimum subset of sensors that covers all targets in the network. Simulation results show the algorithms' performance and the effect of the number of targets on the resulting subset. PMID:25136667
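A minimal sketch of the expected-coverage idea follows, assuming a simple linear falloff in deflection angle and distance; the paper's exact weighting formula is not given in the abstract, so this is only an illustration.

```python
import math

def expected_coverage(sensor_xy, orientation_rad, half_fov_rad, radius, target_xy):
    """Score in [0, 1]: higher when the target is close and near the boresight."""
    dx, dy = target_xy[0] - sensor_xy[0], target_xy[1] - sensor_xy[1]
    dist = math.hypot(dx, dy)
    if dist > radius:
        return 0.0
    # Smallest signed angle between the sensor orientation and the target bearing.
    deflection = abs((math.atan2(dy, dx) - orientation_rad + math.pi) % (2 * math.pi) - math.pi)
    if deflection > half_fov_rad:
        return 0.0
    return (1.0 - deflection / half_fov_rad) * (1.0 - dist / radius)

print(expected_coverage((0, 0), 0.0, math.radians(30), 50.0, (20, 5)))
```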
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
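Purely as a conceptual sketch, the snippet below shows one way weight redistribution over Gaussian components might look when missed detections occur, keeping the total PHD mass fixed. It is not the paper's GM-CPHD update; the actual redistribution rule and adaptive gating described above are more involved.

```python
import numpy as np

def redistribute(prior_w, updated_w, detected, keep=0.5):
    """Give missed components back a fraction of their prior weight, taken
    proportionally from the detected components so total PHD mass is kept."""
    prior_w = np.asarray(prior_w, float)
    out = np.asarray(updated_w, float).copy()
    missed = ~np.asarray(detected)
    restored = np.clip(keep * prior_w[missed] - out[missed], 0.0, None)
    out[missed] += restored
    out[~missed] -= restored.sum() * out[~missed] / out[~missed].sum()
    return out

# Component 2 was missed this scan and its updated weight collapsed.
print(redistribute([0.5, 0.3, 0.2], [0.70, 0.02, 0.25],
                   detected=np.array([True, False, True])))
```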
Pérez-Areales, Francisco Javier; Betari, Nibal; Viayna, Antonio; Pont, Caterina; Espargaró, Alba; Bartolini, Manuela; De Simone, Angela; Rinaldi Alvarenga, José Fernando; Pérez, Belén; Sabate, Raimon; Lamuela-Raventós, Rosa Maria; Andrisano, Vincenza; Luque, Francisco Javier; Muñoz-Torrero, Diego
2017-06-01
Simultaneous modulation of several key targets of the pathological network of Alzheimer's disease (AD) is being increasingly pursued as a promising option to fill the critical gap of efficacious drugs against this condition. A short series of compounds purported to hit multiple targets of relevance in AD has been designed, on the basis of their distinct basicities estimated from high-level quantum mechanical computations, synthesized, and subjected to assays of inhibition of cholinesterases, BACE-1, and Aβ42 and tau aggregation, of antioxidant activity, and of brain permeation. Using, as a template, a lead rhein-huprine hybrid with an interesting multitarget profile, we have developed second-generation compounds, designed by the modification of the huprine aromatic ring. Replacement by [1,8]-naphthyridine or thieno[3,2-e]pyridine systems resulted in decreased, although still potent, acetylcholinesterase or BACE-1 inhibitory activities, which are more balanced relative to their Aβ42 and tau antiaggregating and antioxidant activities. Second-generation naphthyridine- and thienopyridine-based rhein-huprine hybrids emerge as interesting brain permeable compounds that hit several crucial pathogenic factors of AD.
A sequential multi-target Mps1 phosphorylation cascade promotes spindle checkpoint signaling.
Ji, Zhejian; Gao, Haishan; Jia, Luying; Li, Bing; Yu, Hongtao
2017-01-10
The master spindle checkpoint kinase Mps1 senses kinetochore-microtubule attachment and promotes checkpoint signaling to ensure accurate chromosome segregation. The kinetochore scaffold Knl1, when phosphorylated by Mps1, recruits checkpoint complexes Bub1-Bub3 and BubR1-Bub3 to unattached kinetochores. Active checkpoint signaling ultimately enhances the assembly of the mitotic checkpoint complex (MCC) consisting of BubR1-Bub3, Mad2, and Cdc20, which inhibits the anaphase-promoting complex or cyclosome bound to Cdc20 (APC/C-Cdc20) to delay anaphase onset. Using in vitro reconstitution, we show that Mps1 promotes APC/C inhibition by MCC components through phosphorylating Bub1 and Mad1. Phosphorylated Bub1 binds to Mad1-Mad2. Phosphorylated Mad1 directly interacts with Cdc20. Mutations of Mps1 phosphorylation sites in Bub1 or Mad1 abrogate the spindle checkpoint in human cells. Therefore, Mps1 promotes checkpoint activation through sequentially phosphorylating Knl1, Bub1, and Mad1. This sequential multi-target phosphorylation cascade makes the checkpoint highly responsive to Mps1 and to kinetochore-microtubule attachment.
Chen, Jun; Peng, Zhangzhe; Lu, Miaomiao; Xiong, Xuan; Chen, Zhuo; Li, Qianbin; Cheng, Zeneng; Jiang, Dejian; Tao, Lijian; Hu, Gaoyun
2018-01-15
Oxidative stress, inflammation and fibrosis can cause irreversible damage to the cell structure and function of the kidney and are key pathological factors in Diabetic Nephropathy (DN). Therefore, multi-target agents are urgently needed for the clinical treatment of DN. Using Pirfenidone as a lead compound and building on previous research, two novel series of (5-trifluoromethyl)-2(1H)-pyridone analogs were designed and synthesized. The SAR of (5-trifluoromethyl)-2(1H)-pyridone derivatives containing a nitrogen heterocyclic ring has been established for in vitro potency. In addition, compound 8, a novel agent that acts on multiple anti-DN targets with an IC50 of 90 μM in NIH3T3 cell lines, a t1/2 of 4.89±1.33 h in male rats and an LD50 > 2000 mg/kg in mice, has been advanced to preclinical studies as an oral treatment for DN. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhu, Wei; Chen, Hui; Wang, Yulan; Wang, Jiang; Peng, Xia; Chen, Xianjie; Gao, Yinglei; Li, Chunpu; He, Yulong; Ai, Jing; Geng, Meiyu; Zheng, Mingyue; Liu, Hong
2017-07-27
A novel series of pyridin-3-amine derivatives were designed, synthesized, and evaluated as multitargeted protein kinase inhibitors for the treatment of non-small cell lung cancer (NSCLC). Hit 1 was first disclosed by in silico screening against fibroblast growth factor receptors (FGFR), which was subsequently validated by in vitro experiments. The structure-activity relationship (SAR) of its analogues was then explored to afford novel FGFR inhibitors 2a-2p and 3a-3q. Among them, 3m showed potent inhibition against FGFR1, 2, and 3. Interestingly, compound 3m not only inhibited various phosphorylation and downstream signaling across different oncogenic forms in FGFR-overactivated cancer cells but also showed nanomolar level inhibition against several other NSCLC-related oncogene kinases, including RET, EGFR, EGFR/T790M/L858R, DDR2, and ALK. Finally, in vivo pharmacology evaluations of 3m showed significant antitumor activity (TGI = 66.1%) in NCI-H1581 NSCLC xenografts with a good pharmacokinetic profile.
Farina, Roberta; Pisani, Leonardo; Catto, Marco; Nicolotti, Orazio; Gadaleta, Domenico; Denora, Nunzio; Soto-Otero, Ramon; Mendez-Alvarez, Estefania; Passos, Carolina S; Muncipinto, Giovanni; Altomare, Cosimo D; Nurisso, Alessandra; Carrupt, Pierre-Alain; Carotti, Angelo
2015-07-23
The multifactorial nature of Alzheimer's disease calls for the development of multitarget agents addressing key pathogenic processes. To this end, by following a docking-assisted hybridization strategy, a number of aminocoumarins were designed, prepared, and tested as monoamine oxidases (MAOs) and acetyl- and butyryl-cholinesterase (AChE and BChE) inhibitors. Highly flexible N-benzyl-N-alkyloxy coumarins 2-12 showed good inhibitory activities at MAO-B, AChE, and BChE but low selectivity. More rigid inhibitors, bearing meta- and para-xylyl linkers, displayed good inhibitory activities and high MAO-B selectivity. Compounds 21, 24, 37, and 39, the last two featuring an improved hydrophilic/lipophilic balance, exhibited excellent activity profiles with nanomolar inhibitory potency toward hMAO-B, high hMAO-B over hMAO-A selectivity and submicromolar potency at hAChE. Cell-based assays of BBB permeation, neurotoxicity, and neuroprotection supported the potential of compound 37 as a BBB-permeant neuroprotective agent against H2O2-induced oxidative stress with poor interaction as P-gp substrate and very low cytotoxicity.
The Molecular Basis for Dual Fatty Acid Amide Hydrolase (FAAH)/Cyclooxygenase (COX) Inhibition.
Palermo, Giulia; Favia, Angelo D; Convertino, Marino; De Vivo, Marco
2016-06-20
The design of multitarget-directed ligands is a promising strategy for discovering innovative drugs. Here, we report a mechanistic study that clarifies key aspects of the dual inhibition of the fatty acid amide hydrolase (FAAH) and the cyclooxygenase (COX) enzymes by a new multitarget-directed ligand named ARN2508 (2-[3-fluoro-4-[3-(hexylcarbamoyloxy)phenyl]phenyl]propanoic acid). This potent dual inhibitor combines, in a single scaffold, the pharmacophoric elements often needed to block FAAH and COX, that is, a carbamate moiety and the 2-arylpropionic acid functionality, respectively. Molecular modeling and molecular dynamics simulations suggest that ARN2508 uses a noncovalent mechanism of inhibition to block COXs, while inhibiting FAAH via the acetylation of the catalytic Ser241, in line with previous experimental evidence for covalent FAAH inhibition. This study proposes the molecular basis for the dual FAAH/COX inhibition by this novel hybrid scaffold, stimulating further experimental studies and offering new insights for the rational design of novel anti-inflammatory agents that simultaneously act on FAAH and COX. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
NASA Astrophysics Data System (ADS)
Zhang, X.; Wu, B.; Zhang, M.; Zeng, H.
2017-12-01
Rice is one of the main staple foods in East and Southeast Asia; it feeds more than half of the world's population while occupying about 11% of cultivated land. Studies of rice can provide direct or indirect information on food security and water resource management. Remote sensing has proven to be the most effective way to monitor cropland at large scales by exploiting temporal and spectral information. Two main kinds of satellite data have been used to map rice: microwave and optical. The main feature distinguishing rice, the principal crop of paddy fields, from other crops is the flooding of the fields at the planting stage (Figure 1). Microwave satellites can penetrate clouds and are efficient at monitoring this flooding, while vegetation indices derived from optical satellites can distinguish rice from other vegetation. Google Earth Engine is a cloud-based platform that makes it easy to access high-performance computing resources for processing very large geospatial datasets. Google has collected a large amount of remote sensing data from around the world, which gives researchers the possibility of building applications that use multi-source remote sensing data over large areas. In this work, we map the rice planting area in south China through the integration of Landsat-8 OLI, Sentinel-2, and Sentinel-1 Synthetic Aperture Radar (SAR) images; the flowchart is shown in Figure 2. First, a threshold method applied to the VH-polarized backscatter from the SAR sensor and to vegetation indices from the optical sensors, including the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI), was used to classify the rice extent. The forest and water surface extent maps provided by Earth Engine were used to mask forest and water. To overcome the "salt and pepper" effect of pixel-based classification at increased spatial resolution, we segment the optical image and merge the pixel-based classification results with the object-oriented segmentation to obtain the final rice extent map. Finally, a time-series analysis of peak counts over each rice area was used to determine cropping intensity. Rice ground points from the GVG crowdsourcing smartphone application and rice area statistics from the National Bureau of Statistics were used to validate and evaluate the result.
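A minimal Google Earth Engine (Python API) sketch of the thresholding step described above, combining low Sentinel-1 VH backscatter at transplanting with a high optical vegetation index at peak growth; the thresholds, dates, and region below are illustrative placeholders rather than the study's calibrated values.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([110.0, 24.0, 111.0, 25.0])   # hypothetical AOI

vh = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(region)
      .filterDate('2017-04-01', '2017-06-30')                # transplanting window
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select('VH')
      .min())                                                # flooding = lowest backscatter

ndvi = (ee.ImageCollection('COPERNICUS/S2')
        .filterBounds(region)
        .filterDate('2017-07-01', '2017-09-30')              # peak growth window
        .map(lambda img: img.normalizedDifference(['B8', 'B4']).rename('NDVI'))
        .max())

# Candidate rice: flooded at transplanting AND vegetated at peak growth.
rice = vh.lt(-18).And(ndvi.gt(0.6)).selfMask().rename('rice')
```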
Velpuri, Naga Manohar; Senay, Gabriel B.
2012-01-01
Lake Turkana, the largest desert lake in the world, is fed by ungauged or poorly gauged river systems. To meet the demand for electricity in the East African region, Ethiopia is currently building the Gibe III hydroelectric dam on the Omo River, which supplies more than 80% of the inflow to Lake Turkana. On completion, the Gibe III dam will be the tallest dam in Africa, with a height of 241 m. However, the nature of the interactions and the potential impacts of regulated inflows on Lake Turkana are not well understood due to its remote location and the unavailability of reliable in-situ datasets. In this study, we used 12 years (1998–2009) of existing multi-source satellite and model-assimilated global weather data. We used a calibrated, multi-source satellite-data-driven water balance model for Lake Turkana that takes into account model-routed runoff, lake/reservoir evapotranspiration, direct rain on lakes/reservoirs, and releases from the dam to compute lake water levels. The model evaluates the impact of the Gibe III dam using three different approaches to generate rainfall-runoff scenarios: a historical approach, a knowledge-based approach, and a nonparametric bootstrap resampling approach. All the approaches provided comparable and consistent results. Model results indicated that the hydrological impact of the dam on Lake Turkana would vary with the magnitude and distribution of rainfall after dam commencement. On average, the reservoir would take 8–10 months after commencement to reach a minimum operation level of 201 m depth of water. During the dam-filling period, the lake level would drop by up to 2 m (95% confidence) compared to the lake level modelled without the dam. The lake-level variability caused by regulated inflows after dam commissioning was found to be within the lake's natural variability of 4.8 m. Moreover, modelling results indicated that the hydrological impact of the Gibe III dam would depend on the initial lake level at the time of dam commencement. Areas along the Lake Turkana shoreline that are vulnerable to fluctuations in lake levels were also identified. This study demonstrates the effectiveness of using existing multi-source satellite data in a basic modeling framework to assess the potential hydrological impact of an upstream dam on a terminal downstream lake. The results obtained from this study could also be used to evaluate alternative dam-filling scenarios and assess the potential impact of the dam on Lake Turkana under different operational strategies.
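The core bookkeeping of such a lake water-balance model can be sketched as follows; the constant lake area and the monthly inputs are invented for illustration and do not represent the calibrated Lake Turkana model.

```python
LAKE_AREA_KM2 = 6750.0   # assumed constant lake area for this sketch

def update_level(level_m, inflow_mcm, rain_mm, et_mm, area_km2=LAKE_AREA_KM2):
    """Advance the lake level by one time step (volumes in million m^3)."""
    rain_mcm = rain_mm / 1000.0 * area_km2   # km^2 x m of depth = 1e6 m^3
    et_mcm = et_mm / 1000.0 * area_km2
    storage_change_mcm = inflow_mcm + rain_mcm - et_mcm
    return level_m + storage_change_mcm / area_km2   # 1e6 m^3 / 1e6 m^2 = m

level = 362.0   # hypothetical starting level (m)
for inflow, rain, et in [(1800, 40, 210), (900, 25, 220), (400, 10, 230)]:
    level = update_level(level, inflow, rain, et)
    print(f"lake level: {level:.2f} m")
```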
Giménez-Llort, L; Ratia, M; Pérez, B; Camps, P; Muñoz-Torrero, D; Badia, A; Clos, M V
2015-06-01
The present work describes, for the first time, the in vivo effects of the multitarget compound AVCRI104P3, a new anticholinesterase drug with potent inhibitory effects on human AChE, human BuChE and BACE-1 activities, as well as on AChE-induced and self-induced Aβ aggregation. We characterized the behavioral effects of chronic treatment with AVCRI104P3 (0.6 μmol kg(-1), i.p., 21 days) in a sample of middle-aged (12-month-old) male 129/Sv×C57BL/6 mice with poor cognitive performance, as shown by the slow acquisition curves of saline-treated animals. In addition, a comparative assessment of cognitive and non-cognitive actions was done using its in vitro equipotent dose of huprine X (0.12 μmol kg(-1)), a huperzine A-tacrine hybrid. The screening assessed locomotor activity, anxiety-like behaviors, cognitive function and side effects. The results on the 'acquisition' of spatial learning and memory show that AVCRI104P3 exerted pro-cognitive effects, improving both short- and long-term processes and resulting in fast and efficient acquisition of the place task in the Morris water maze. On the other hand, a removal test and a perceptual visual learning task indicated that both AChEIs improved short-term 'memory' as compared to saline-treated mice. Both drugs elicited the same response in the corner test, but only AVCRI104P3 exhibited anxiolytic-like actions in the dark/light box test. These cognitive-enhancing and anxiolytic-like effects demonstrated herein in a sample of middle-aged animals, together with the lack of adverse effects, strongly encourage further studies on AVCRI104P3 as a promising multitarget therapeutic agent for the treatment of the cholinergic dysfunction underlying natural aging and/or dementias. Copyright © 2015. Published by Elsevier B.V.
AVN-101: A Multi-Target Drug Candidate for the Treatment of CNS Disorders.
Ivachtchenko, Alexandre V; Lavrovsky, Yan; Okun, Ilya
2016-05-25
Lack of efficacy of many new, highly selective and specific drug candidates in treating diseases with poorly understood or complex etiology, as many central nervous system (CNS) diseases are, has encouraged the idea of developing multi-modal (multi-targeted) drugs. In this manuscript, we describe the molecular pharmacology, in vitro ADME, pharmacokinetics in animals and humans (part of the Phase I clinical studies), bio-distribution, bioavailability, in vivo efficacy, and safety profile of the multimodal drug candidate AVN-101. We have carried out the development of a next-generation drug candidate with a multi-targeted mechanism of action to treat CNS disorders. AVN-101 is a very potent 5-HT7 receptor antagonist (Ki = 153 pM), with slightly lesser potency toward 5-HT6, 5-HT2A, and 5-HT2C receptors (Ki = 1.2-2.0 nM). AVN-101 also exhibits rather high affinity toward histamine H1 (Ki = 0.58 nM) and adrenergic α2A, α2B, and α2C (Ki = 0.41-3.6 nM) receptors. AVN-101 shows good oral bioavailability and facilitated blood-brain barrier permeability, low toxicity, and reasonable efficacy in animal models of CNS diseases. The Phase I clinical study indicates that AVN-101 is well tolerated when taken orally at doses of up to 20 mg daily. It does not dramatically influence plasma and urine biochemistry, nor does it prolong the QT ECG interval, thus indicating low safety concerns. The primary therapeutic area for AVN-101 to be tested in clinical trials would be Alzheimer's disease. However, due to its anxiolytic and anti-depressive activities, there is a strong rationale for it also to be studied in such diseases as general anxiety disorders, depression, schizophrenia, and multiple sclerosis.
AVN-101: A Multi-Target Drug Candidate for the Treatment of CNS Disorders
Ivachtchenko, Alexandre V.; Lavrovsky, Yan; Okun, Ilya
2016-01-01
Lack of efficacy of many new, highly selective and specific drug candidates in treating diseases with poorly understood or complex etiology, as many central nervous system (CNS) diseases are, has encouraged the idea of developing multi-modal (multi-targeted) drugs. In this manuscript, we describe the molecular pharmacology, in vitro ADME, pharmacokinetics in animals and humans (part of the Phase I clinical studies), bio-distribution, bioavailability, in vivo efficacy, and safety profile of the multimodal drug candidate AVN-101. We have carried out the development of a next-generation drug candidate with a multi-targeted mechanism of action to treat CNS disorders. AVN-101 is a very potent 5-HT7 receptor antagonist (Ki = 153 pM), with slightly lesser potency toward 5-HT6, 5-HT2A, and 5-HT2C receptors (Ki = 1.2–2.0 nM). AVN-101 also exhibits rather high affinity toward histamine H1 (Ki = 0.58 nM) and adrenergic α2A, α2B, and α2C (Ki = 0.41–3.6 nM) receptors. AVN-101 shows good oral bioavailability and facilitated blood-brain barrier permeability, low toxicity, and reasonable efficacy in animal models of CNS diseases. The Phase I clinical study indicates that AVN-101 is well tolerated when taken orally at doses of up to 20 mg daily. It does not dramatically influence plasma and urine biochemistry, nor does it prolong the QT ECG interval, thus indicating low safety concerns. The primary therapeutic area for AVN-101 to be tested in clinical trials would be Alzheimer’s disease. However, due to its anxiolytic and anti-depressive activities, there is a strong rationale for it also to be studied in such diseases as general anxiety disorders, depression, schizophrenia, and multiple sclerosis. PMID:27232215
Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie
2011-03-22
Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological components of the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for the anticoagulant activities of compounds that combines network efficiency analysis with a scoring function from molecular docking.
Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie
2011-01-01
Background Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological components of the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid the identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for the anticoagulant activities of compounds that combines network efficiency analysis with a scoring function from molecular docking. PMID:21445339
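The network-efficiency side of this approach can be sketched by ranking nodes of a reaction graph by the drop in global efficiency when each node is removed; the toy graph below is an arbitrary stand-in for the human clotting-cascade network used in the study.

```python
import networkx as nx

# Toy reaction graph (illustrative nodes only, not the study's cascade model).
G = nx.Graph()
G.add_edges_from([
    ("TF", "VIIa"), ("VIIa", "Xa"), ("IXa", "VIIIa:IXa"), ("VIIIa", "VIIIa:IXa"),
    ("VIIIa:IXa", "Xa"), ("Xa", "thrombin"), ("thrombin", "fibrin"),
])

base = nx.global_efficiency(G)
fragility = {}
for node in G.nodes:
    H = G.copy()
    H.remove_node(node)
    fragility[node] = base - nx.global_efficiency(H)   # efficiency drop = fragility

for node, drop in sorted(fragility.items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} efficiency drop: {drop:.3f}")
```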
Liu, Wei; Rabinovich, Alon; Nash, Yuval; Frenkel, Dan; Wang, Yuqiang; Youdim, Moussa B H; Weinreb, Orly
2017-02-01
A previous study demonstrated that the novel multitarget compound MT-031 preserved in one molecular entity the beneficial properties of its parent drugs, rasagiline and rivastigmine, exerting high dual potency of monoamine oxidase-A (MAO-A) and cholinesterase (ChE) inhibition in acutely treated mice and neuroprotective effects against H2O2-induced neurotoxicity in human neuroblastoma SH-SY5Y cells. The present study aimed to further investigate the anti-inflammatory and protective effects of MT-031 in the scopolamine mouse model and in inflammatory cell cultures. Our findings demonstrated that once-daily chronic administration of MT-031 (5-10 mg/kg) to mice antagonized scopolamine-induced memory and cognitive impairments, displayed brain-selective MAO-A and AChE/BuChE inhibition, increased the levels of striatal dopamine (DA), serotonin (5-HT) and norepinephrine, and prevented the metabolism of DA and 5-HT. In addition, MT-031 upregulated the mRNA expression levels of Bcl-2, the neurotrophic factors (e.g., brain-derived neurotrophic factor (BDNF), glial cell line-derived neurotrophic factor (GDNF) and nerve growth factor (NGF)), the antioxidant enzyme catalase and the anti-inflammatory cytokine, neurotrophic tyrosine kinase receptor (Ntrk), and down-regulated the mRNA expression levels of the pro-inflammatory interleukin (IL)-6 in scopolamine-induced mice. Accordingly, MT-031 was shown to reduce reactive oxygen species accumulation, increase the levels of the anti-inflammatory cytokine IL-10 and decrease the levels of the pro-inflammatory cytokines IL-1β, IL-6, IL-17 and interferon-gamma (IFN-γ) in activated mouse splenocytes and microglial cells. Taken together, these pharmacological properties of MT-031 can be of clinical importance for developing this novel multitarget compound as a drug candidate for the treatment of Alzheimer's disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multitarget stool DNA testing for colorectal-cancer screening.
Imperiale, Thomas F; Ransohoff, David F; Itzkowitz, Steven H; Levin, Theodore R; Lavin, Philip; Lidgard, Graham P; Ahlquist, David A; Berger, Barry M
2014-04-03
An accurate, noninvasive test could improve the effectiveness of colorectal-cancer screening. We compared a noninvasive, multitarget stool DNA test with a fecal immunochemical test (FIT) in persons at average risk for colorectal cancer. The DNA test includes quantitative molecular assays for KRAS mutations, aberrant NDRG4 and BMP3 methylation, and β-actin, plus a hemoglobin immunoassay. Results were generated with the use of a logistic-regression algorithm, with values of 183 or more considered to be positive. FIT values of more than 100 ng of hemoglobin per milliliter of buffer were considered to be positive. Tests were processed independently of colonoscopic findings. Of the 9989 participants who could be evaluated, 65 (0.7%) had colorectal cancer and 757 (7.6%) had advanced precancerous lesions (advanced adenomas or sessile serrated polyps measuring ≥1 cm in the greatest dimension) on colonoscopy. The sensitivity for detecting colorectal cancer was 92.3% with DNA testing and 73.8% with FIT (P=0.002). The sensitivity for detecting advanced precancerous lesions was 42.4% with DNA testing and 23.8% with FIT (P<0.001). The rate of detection of polyps with high-grade dysplasia was 69.2% with DNA testing and 46.2% with FIT (P=0.004); the rates of detection of serrated sessile polyps measuring 1 cm or more were 42.4% and 5.1%, respectively (P<0.001). Specificities with DNA testing and FIT were 86.6% and 94.9%, respectively, among participants with nonadvanced or negative findings (P<0.001) and 89.8% and 96.4%, respectively, among those with negative results on colonoscopy (P<0.001). The numbers of persons who would need to be screened to detect one cancer were 154 with colonoscopy, 166 with DNA testing, and 208 with FIT. In asymptomatic persons at average risk for colorectal cancer, multitarget stool DNA testing detected significantly more cancers than did FIT but had more false positive results. (Funded by Exact Sciences; ClinicalTrials.gov number, NCT01397747.).
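The "number needed to screen" figures quoted above follow directly from the reported prevalence and sensitivities, as this small check shows (colonoscopy is treated as the reference with 100% detection):

```python
# Worked check of the reported numbers needed to screen (NNS).
participants, cancers = 9989, 65
prevalence = cancers / participants

for test, sensitivity in [("colonoscopy", 1.000),
                          ("multitarget DNA", 0.923),
                          ("FIT", 0.738)]:
    nns = 1.0 / (prevalence * sensitivity)   # screens per cancer detected
    print(f"{test:16s} NNS ~ {nns:.0f}")
# Prints roughly 154, 166, and 208, matching the abstract.
```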
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement of the resolution of multi-source imagery from visible-light, multi-spectral, and hyperspectral satellites, high-resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction, and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of the generation of typical vehicle object classes, nonparametric density estimation, mean-shift segmentation, a multi-scale corner detection algorithm, and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
An Approach for Forest Inventory in Canada's Northern Boreal region, Northwest Territories
NASA Astrophysics Data System (ADS)
Mahoney, C.; Hopkinson, C.; Hall, R.; Filiatrault, M.
2017-12-01
The northern extent of Canada's boreal forest is largely inaccessible, resulting in logistical, financial, and human challenges with respect to obtaining concise and accurate forest resource inventory (FRI) attributes such as stand height, aboveground biomass and forest carbon stocks. This challenge is further exacerbated by mandated government resource management and reporting of key attributes with respect to assessing the impacts of natural disturbances, monitoring wildlife habitat and establishing policies to mitigate the effects of climate change. This study presents a framework methodology used to inventory canopy height and crown closure over a 420,000 km2 area in Canada's Northwest Territories (NWT) by integrating field, LiDAR and satellite remote sensing data. Attributes are propagated from available field plots to coincident airborne LiDAR and through to satellite laser altimetry footprints. A quality-controlled subset of the latter is then submitted to a k-nearest neighbor (kNN) imputation algorithm to produce a continuous map of each attribute on a 30 m grid. The resultant kNN stand height (r=0.62, p=0.00) and crown closure (r=0.64, p=0.00) products were identified as statistically similar to a comprehensive independent airborne LiDAR source. Regional uncertainty can be produced with each attribute to identify areas of potential improvement through future strategic data acquisitions or the fine tuning of model parameters. This study's framework concept was developed to inform Natural Resources Canada - Canadian Forest Service's Multisource Vegetation Inventory and to update vast regions of Canada's northern forest inventories; however, its applicability can be generalized to any environment. Not only can such a framework incorporate other data sources (such as Synthetic Aperture Radar) to potentially better characterize forest attributes, but it can also utilize future Earth observation mission data (for example, ICESat-2) to monitor forest dynamics and the status, health and sustainability of Canada's northern boreal regions, where detailed inventory information is typically not available.
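A minimal sketch of the kNN imputation step: attributes known at reference footprints are imputed onto unsampled 30 m grid cells from shared predictors. The synthetic predictors, the response, and the choice of k below are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Reference footprints: predictors (e.g. spectral bands, terrain) + known height.
X_ref = rng.normal(size=(2000, 6))
height_ref = 5 + 3 * X_ref[:, 0] - 2 * X_ref[:, 3] + rng.normal(0, 1, 2000)

# Unsampled grid cells carrying the same predictors.
X_grid = rng.normal(size=(10000, 6))

knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_ref, height_ref)
height_grid = knn.predict(X_grid)   # imputed stand height per grid cell
print(height_grid[:5].round(2))
```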
Thiele, Ines; Hyduke, Daniel R; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan M T; Hsiung, Chao A; De Keersmaecker, Sigrid C J; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L; Shin, Sook-il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M; Zengler, Karsten; Palsson, Bernhard O; Adkins, Joshua N; Bumann, Dirk
2011-01-18
Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen that causes various diseases, and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Taken together, with the growing number of parallel MRs, a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin
2011-01-01
Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen that causes various diseases, and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, taken together, with the growing number of parallel MRs, a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.
de Jonge, Jan; Peeters, Maria C W
2009-05-01
Most studies of counterproductive work behavior (CWB) are criticized for overreliance on single-source self-reports. This study attempts to triangulate on behaviors and perceptions of the work environment by linking job incumbents' self-reports with coworker reports of the job incumbents' behaviors. The theoretical framework is the Demand-Induced Strain Compensation (DISC) Model, which proposes in general that specific job resources should match specific job demands to reduce deviant behavioral outcomes such as CWB. The aims were to test the extent to which job incumbent self-reports and coworker reports of CWB in health care work converge, and the extent to which job incumbent-reported work-related antecedents (i.e., job demands and job resources) similarly predict both self-reported and coworker-reported behaviors (in line with DISC theory). A cross-sectional survey with anonymous questionnaires was conducted, using data from two different sources (self-reports and coworker reports), in a large organization for residential elderly care in a northern urban area of the Netherlands. Self-report and coworker questionnaires were distributed to 123 health care workers, of whom 73 returned the self-report questionnaire (59% response rate). In addition, 66 of the 123 coworker questionnaires were returned (54% coworker response rate). In total, 54 surveys of job incumbents and coworkers could be matched. In addition to descriptive statistics, t-tests, and correlations, hierarchical regression analyses were conducted using SPSS 15.0 for Windows. Correlations and a t-test demonstrated significant convergence between job incumbent and coworker reports of CWB. Hierarchical regression analyses showed that both job incumbent and coworker data consistently demonstrated CWB to be related to its work-related antecedents. Specifically, findings showed that both physical and emotional job resources moderated the relation between physical job demands and CWB. The current findings provide stronger evidence than past work that (multi-source measured) CWB is associated with job demands and job resources. Moreover, the present study implies that DISC theory has the potential to make a profound contribution to our understanding of counterproductive work behaviors in health care work. Future longitudinal studies should investigate these kinds of relations more intensively.
Efficient Multi-Source Data Fusion for Decentralized Sensor Networks
2006-10-01
[Report text only partially recoverable.] Legible fragments describe a Common Operating Picture (COP); the Robovolc platform accessing a single DDF node associated with a CCTV camera (marked in orange in Figure 3a) to defend a sensitive area; bearing-only tracking in Gaussian environments; and Figure 10 ("Particle Distribution Snapshots"), which reports the position error between each target and the mean of the estimated particle set.
Data Mining Algorithms for Classification of Complex Biomedical Data
ERIC Educational Resources Information Center
Lan, Liang
2012-01-01
In my dissertation, I will present my research, which contributes to solving the following three open problems in biomedical informatics: (1) Multi-task approaches for microarray classification; (2) Multi-label classification of gene and protein prediction from multi-source biological data; (3) Spatial scan for movement data. In microarray…
Directed Vapor Deposition: Low Vacuum Materials Processing Technology
2000-01-01
[Figure residue: schematic labels for an electron-beam directed vapor deposition setup, showing crucibles with constituents A and B, electron beams, substrate and deposit, fluxes of A and B, composition, a "skull" melt with coolant in a copper crucible, the evaporation target and evaporant material, the vapor flux, and a fibrous coating surface (panels a and b).] The surviving text notes sharp (0.5 mm) beam focussing. When used with multisource
Evaluation of Professional Role Competency during Psychiatry Residency
ERIC Educational Resources Information Center
Grujich, Nikola N.; Razmy, Ajmal; Zaretsky, Ari; Styra, Rima G.; Sockalingam, Sanjeev
2012-01-01
Objective: The authors sought to determine psychiatry residents' perceptions on the current method of evaluating professional role competency and the use of multi-source feedback (MSF) as an assessment tool. Method: Authors disseminated a structured, anonymous survey to 128 University of Toronto psychiatry residents, evaluating the current mode of…
Estimating error cross-correlations in soil moisture data sets using extended collocation analysis
USDA-ARS?s Scientific Manuscript database
Consistent global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multi-source soil moisture retrievals int...
The Effect of Surgeon Empathy and Emotional Intelligence on Patient Satisfaction
ERIC Educational Resources Information Center
Weng, Hui-Ching; Steed, James F.; Yu, Shang-Won; Liu, Yi-Ten; Hsu, Chia-Chang; Yu, Tsan-Jung; Chen, Wency
2011-01-01
We investigated the associations of surgeons' emotional intelligence and surgeons' empathy with patient-surgeon relationships, patient perceptions of their health, and patient satisfaction before and after surgical procedures. We used multi-source approaches to survey 50 surgeons and their 549 outpatients during initial and follow-up visits.…
Single Mothers of Early Adolescents: Perceptions of Competence
ERIC Educational Resources Information Center
Beckert, Troy E.; Strom, Paris S.; Strom, Robert D.; Darre, Kathryn; Weed, Ane
2008-01-01
The purpose of this study was to examine similarities and differences in single mothers' and adolescents' perceptions of parenting competencies from a developmental assets approach. A multi-source (mothers [n = 29] and 10-14-year-old adolescent children [n = 29]), single-method (both generations completed the Parent Success Indicator)…
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
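A toy version of the optimization loop described above: candidate pulses are linear combinations of a few chirped basis functions, and a simple genetic algorithm evolves the combination coefficients. The fitness function here is an arbitrary envelope-matching objective standing in for the paper's multi-target dissociation-yield fitness.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(-5, 5, 400)

def chirped(t, t0, sigma, w0, chirp):
    """Gaussian envelope with a linearly chirped carrier."""
    return np.exp(-((t - t0) / sigma) ** 2) * np.cos((w0 + chirp * t) * t)

basis = np.array([chirped(t, t0, 1.0, 2.0, c)
                  for t0 in (-2.0, 0.0, 2.0) for c in (0.0, 0.3)])   # 6 basis pulses
target = np.exp(-(t - 1.0) ** 2)                                     # toy target envelope

def fitness(coeffs):
    pulse = coeffs @ basis
    return -np.mean((np.abs(pulse) - target) ** 2)                   # higher is better

pop = rng.normal(size=(40, basis.shape[0]))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                          # keep the best half
    children = parents[rng.integers(0, 20, 20)] + 0.1 * rng.normal(size=(20, basis.shape[0]))
    pop = np.vstack([parents, children])                             # elitism + mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best fitness:", fitness(best))
```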
NASA Astrophysics Data System (ADS)
Kauweloa, Kevin Ikaika
The approximate BED (BEDA) is calculated for multi-phase cases because current treatment planning systems (TPSs) are incapable of performing BED calculations. There has been no study of the mathematical accuracy and precision of BEDA relative to the true BED (BEDT), or of how that might negatively impact patient care. The purpose of the first aim was to study this mathematical accuracy and precision in both hypothetical and clinical situations, while the next two aims were to develop multi-phase BED optimization concepts for multi-target liver stereotactic body radiation therapy (SBRT) cases and for gynecological cases in which patients are treated with high-dose-rate (HDR) brachytherapy along with external beam radiotherapy (EBRT). MATLAB algorithms created for this work were used to mathematically analyze the accuracy and precision of BEDA relative to BEDT in both hypothetical and clinical situations on a 3D basis. The organs-at-risk (OARs) of ten head & neck and ten prostate cancer patients were studied for the clinical situations. The accuracy of BEDA was shown to vary between OARs as well as between patients. The percentage of patients with an overall BEDA percent error less than 1% was 50% for the Optic Chiasm and Brainstem, 70% for the Left and Right Optic Nerves as well as the Rectum and Bladder, and 80% for the Normal Brain and Spinal Cord. For each OAR, there were always patients for whom the percent error was greater than 1%. This is a cause for concern, since the goal of radiation therapy is to reduce the overall uncertainty of treatment, and calculating BEDA distributions adds treatment uncertainty through percent errors greater than 1%. The revealed inaccuracy and imprecision of BEDA support the argument for using BEDT. The multi-target liver study involved applying BEDT in order to reduce the number of dose limits to one, rather than one for each fractionation scheme, in multi-target liver SBRT treatments. A BEDT limit was derived from the current, clinically accepted dose limits, allowing BEDT distributions to be calculated and used to determine whether at least 700 cc of the healthy liver received less than the BEDT limit. Three previously treated multi-target liver cancer patients were studied. For each case, it was shown that the conventional treatment plans were relatively conservative and that more than 700 cc of the healthy liver received less than the BEDT limit. These results show that greater doses can be delivered to the targets without exceeding the BEDT limit in the healthy tissue, where radiation toxicity typically arises. When BEDT is applied to gynecological cases, it can reveal the relative effect each treatment would have individually; hence, the cumulative BEDT would better inform the physician of the potential outcome of the patient's treatment. The problem presented by these cases, however, is how to sum dose distributions when there is significant motion between treatments and applicators are present during the HDR phase. One way to calculate the cumulative BEDT is to use structure-guided deformable image registration (SG-DIR) that focuses only on the anatomical contours, to avoid errors introduced by the applicators. Eighteen gynecological patients were studied, and VelocityAI was used to perform this SG-DIR. In addition, a formalism was developed to assess and characterize the remnant dose-mapping error of this approach, based on the shortest distance between contour points (SDBP). The results revealed that warping errors produced relatively large normal tissue complication probability (NTCP) values, which are certainly non-negligible and render this method clinically non-viable. However, a more accurate SG-DIR algorithm could improve the accuracy of BEDT distributions in these multi-phase cases.
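A hedged worked example of the multi-phase issue, using the standard linear-quadratic form BED = n·d·(1 + d/(α/β)): the "approximate" BED below simply lumps both phases into one schedule using the total dose and total fraction number, which may differ from the thesis's exact BEDA definition, so treat it only as an illustration.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic biologically effective dose for one phase."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

alpha_beta = 3.0                       # typical late-responding tissue value (Gy)
phases = [(25, 1.8), (5, 6.0)]         # hypothetical EBRT phase + hypofractionated boost

bed_true = sum(bed(n, d, alpha_beta) for n, d in phases)          # per-phase sum

total_dose = sum(n * d for n, d in phases)
total_fx = sum(n for n, _ in phases)
bed_approx = bed(total_fx, total_dose / total_fx, alpha_beta)     # single-schedule lump

print(f"BED_T = {bed_true:.1f} Gy3, BED_A = {bed_approx:.1f} Gy3, "
      f"error = {100 * (bed_approx - bed_true) / bed_true:.1f}%")
```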
ERIC Educational Resources Information Center
Hearon, Brittany V.
2017-01-01
Youth psychological well-being has become increasingly acknowledged as not merely the absence of psychological distress, but the presence of positive indicators of optimal functioning. Students with complete mental health (i.e., low psychopathology and high well-being) demonstrate the best academic, social, and physical health outcomes. As such,…
ERIC Educational Resources Information Center
Roth, Rachel A.; Suldo, Shannon M.; Ferron, John M.
2017-01-01
Most interventions intended to improve subjective well-being, termed "positive psychology interventions" (PPIs), have neglected to include relevant stakeholders in youth's lives and have not included booster sessions intended to maintain gains in subjective well-being. The current study investigated the impact of a multitarget,…
State-of-the-Art: DTM Generation Using Airborne LIDAR Data
Chen, Ziyue; Gao, Bingbo; Devereux, Bernard
2017-01-01
Digital terrain model (DTM) generation is the fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to present and test a variety of DTM generation methods. Although great progress has been made, DTM generation, especially in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. According to the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis, and multi-scale comparison. Typical methods for each category are briefly introduced, and the merits and limitations of each category are discussed accordingly. Despite their different filtering strategies, these DTM generation methods face similar difficulties when applied to sharply changing terrain, areas with dense non-ground features, and complicated landscapes. This paper suggests that the fusion of multiple sources and the integration of different methods can be effective ways to improve the performance of DTM generation. PMID:28098810
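As an illustration of the morphology-based filtering category mentioned above, the sketch below approximates the ground surface with a grey-scale opening of a rasterized surface and keeps returns close to it; the synthetic terrain, window size, and height tolerance are assumed values, not a method from the review.

```python
import numpy as np
from scipy.ndimage import grey_opening

rng = np.random.default_rng(7)
x, y = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
ground = 0.05 * x + 2 * np.sin(y / 15)                       # smooth terrain (m)
canopy = (rng.random((200, 200)) < 0.3) * rng.uniform(5, 20, (200, 200))
surface = ground + canopy                                    # rasterized return surface

approx_ground = grey_opening(surface, size=(15, 15))         # remove raised objects
is_ground = (surface - approx_ground) < 0.5                  # height tolerance (m)
print(f"ground fraction: {is_ground.mean():.2f}")
```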
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.
A Geospatial Information Grid Framework for Geological Survey
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper. PMID:26710255
A semantic model for multimodal data mining in healthcare information systems.
Iakovidis, Dimitris; Smailis, Christos
2012-01-01
Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e., measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g., those describing body parts, anatomies and pathological findings. The proposed model has been developed in the Web Ontology Language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for the automatic annotation of medical data.
Judicialization 2.0: Understanding right-to-health litigation in real time.
Biehl, João; Socal, Mariana P; Gauri, Varun; Diniz, Debora; Medeiros, Marcelo; Rondon, Gabriela; Amon, Joseph J
2018-05-21
Over the past two decades, debate over the whys, the hows, and the effects of the ever-expanding phenomenon of right-to-health litigation ('judicialization') throughout Latin America has been marked by polarised arguments and limited information. In contrast to claims of judicialization as a positive or negative trend, less attention has been paid to ways of better understanding the phenomenon in real time. In this article, we propose a new approach-Judicialization 2.0-that recognises judicialization as an integral part of democratic life. This approach seeks to expand access to information about litigation on access to medicines (and health care generally) in order to better characterise the complexity of the phenomenon and thus inform new research and more robust public discussions. Drawing on our multi-disciplinary perspectives and field experiences in highly judicialized contexts, we describe a new multi-source, multi-stakeholder mixed-method approach designed to capture the patterns and heterogeneity of judicialization and to understand its medical and socio-political impact in real time, along with its counterfactuals. By facilitating greater data availability and open access, we can drive advances towards transparent and participatory priority setting, as well as accountability mechanisms that promote quality universal health coverage.
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for different types of rivers has remained out of reach to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures is used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), the percentage of data with a relative error ≤ 50% and 25% (P50, P25), and the percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are included, the new model (7) achieves better performance than the previous ones. The MRE of model (7) is generally < 20%, which is better than that reported in previous studies. The model is validated using data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in the calibration period. This probably results from a temporal shift in the dominant controls, caused by channel change under a varying flow regime. With advances in earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can serve as an effective surrogate for estimating flow resistance in large sand-bed rivers such as the lower Yellow River.
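To make the goodness-of-fit and model-selection measures listed above concrete, the following minimal Python sketch computes RMSE, the Nash coefficient, MRE, P50/P25, AIC and BIC for a calibrated model; the synthetic observed/predicted values and the Gaussian-likelihood forms of AIC and BIC are illustrative assumptions, not the study's actual models or data.

    import numpy as np

    def evaluate_fit(observed, predicted, n_params):
        """Goodness-of-fit and model-selection measures for a calibrated model."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        n = observed.size
        residuals = observed - predicted
        rel_err = np.abs(residuals) / observed

        rmse = np.sqrt(np.mean(residuals ** 2))
        # Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean
        nash = 1.0 - np.sum(residuals ** 2) / np.sum((observed - observed.mean()) ** 2)
        mre = np.mean(rel_err)                 # mean relative error
        p50 = np.mean(rel_err <= 0.50)         # fraction of data within 50 percent
        p25 = np.mean(rel_err <= 0.25)         # fraction of data within 25 percent

        # Gaussian-likelihood forms of AIC and BIC (k counts the error variance too)
        k = n_params + 1
        rss = np.sum(residuals ** 2)
        aic = n * np.log(rss / n) + 2 * k
        bic = n * np.log(rss / n) + k * np.log(n)
        return {"rmse": rmse, "nash": nash, "mre": mre,
                "p50": p50, "p25": p25, "aic": aic, "bic": bic}

    # Hypothetical example: flow resistance predicted by a model with 3 fitted parameters
    obs = [0.012, 0.015, 0.011, 0.018, 0.014]
    pred = [0.013, 0.014, 0.012, 0.016, 0.015]
    print(evaluate_fit(obs, pred, n_params=3))

Lower AIC and BIC values favour a model; adding predictors such as channel width and suspended sediment concentration is justified only if the criteria still improve after the extra parameters are penalized.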
Statistical Methods in AI: Rare Event Learning Using Associative Rules and Higher-Order Statistics
NASA Astrophysics Data System (ADS)
Iyer, V.; Shetty, S.; Iyengar, S. S.
2015-07-01
Rare event learning has until recently seen little active research, owing to the unavailability of algorithms that can deal with large samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used to deal with noisy sensor streams. We extend existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing with an ensemble of trees built by bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove using Hoeffding bounds that temporal-window-based sampling from sensor data streams converges after n samples, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node-splitting technique that can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show on empirical datasets that explicit rule-learning computation is linear in time and depends only on the number of leaves in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields the minimum number (m) of leaves, keeping pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
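The Hoeffding-bound argument above can be illustrated with a small calculation: for stream values bounded within a range R, the number of samples n needed for the window mean to lie within ε of the true mean with probability at least 1 - δ satisfies n ≥ R² ln(1/δ) / (2ε²). The sketch below is only an illustrative calculation under that standard bound, not the paper's implementation.

    import math

    def hoeffding_sample_size(value_range, epsilon, delta):
        """Samples needed so the window mean is within epsilon of the true mean
        with probability at least 1 - delta (Hoeffding inequality)."""
        return math.ceil(value_range ** 2 * math.log(1.0 / delta) / (2.0 * epsilon ** 2))

    # Hypothetical sensor stream rescaled to [0, 1]: the window converges after ~265 samples
    print(hoeffding_sample_size(value_range=1.0, epsilon=0.1, delta=0.005))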
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
ERIC Educational Resources Information Center
Powers, Joshua B.
This study investigated institutional resource factors that may explain differential performance with university technology transfer--the process by which university research is transformed into marketable products. Using multi-source data on 108 research universities, a set of internal resources (financial, physical, human capital, and…
ERIC Educational Resources Information Center
Lans, Thomas; Biemans, Harm; Mulder, Martin; Verstegen, Jos
2010-01-01
An important assumption of entrepreneurial competence is that (at least part of) it can be learned and developed. However, human resources development (HRD) practices aimed at further strengthening and developing small-business owner-managers' entrepreneurial competence are complex and underdeveloped. A multisource assessment of owner-managers'…
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
WHO Expert Committee on Specifications for Pharmaceutical Preparations. Forty-ninth report.
2015-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use. Revised procedure for the development of monographs and other texts for The International Pharmacopoeia; Revised updating mechanism for the section on radiopharmaceuticals in The International Pharmacopoeia; Revision of the supplementary guidelines on good manufacturing practices: validation, Appendix 7: non-sterile process validation; General guidance for inspectors on hold-time studies; 16 technical supplements to Model guidance for the storage and transport of time- and temperature-sensitive pharmaceutical products; Recommendations for quality requirements when plant-derived artemisinin is used as a starting material in the production of antimalarial active pharmaceutical ingredients; Multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability: revision; Guidance on the selection of comparator pharmaceutical products for equivalence assessment of interchangeable multisource (generic) products: revision; and Good review practices: guidelines for national and regional regulatory authorities.
Miranda, Elaine Silva; Pinto, Cláudia Du Bocage Santos; dos Reis, André Luis de Almeida; Emmerick, Isabel Cristina Martins; Campos, Mônica Rodrigues; Luiza, Vera Lucia; Osorio-de-Castro, Claudia Garcia Serpa
2009-10-01
A study to identify availability and prices of medicines, according to type of provider, was conducted in the five regions of Brazil. A list of medicines to treat prevalent diseases was investigated, using the medicines price methodology developed by the World Health Organization and Health Action International, adapted for Brazil. In the public sector, bioequivalent (vis-à-vis reference brand) generics are less available than multisource products. For most medicines (71.4%), the availability of bioequivalent generics was less than 10%. In the private sector, the average number of different bioequivalent generic versions in the outlets was far smaller than the number of versions on the market. There was a positive correlation between the number of generics on the market, or those found at outlets, and the price variation in bioequivalent generic products, in relation to the maximum consumer price. It is estimated that price competition is occurring among bioequivalent generic drugs and between them and multisource products for the same substance, but not with reference brands.
Multi-Target Tracking via Mixed Integer Optimization
2016-05-13
... solving these two problems separately; however, few algorithms attempt to solve them simultaneously and even fewer utilize optimization. In this paper we ... introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing ... Kalman filter [5], which updates the trajectory estimates before the algorithm progresses to the next scan. This process repeats sequentially ...
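For context on the Kalman filter step mentioned in the excerpt (updating each trajectory estimate before the algorithm progresses to the next scan), here is a minimal constant-velocity predict/update sketch in Python; the 1D state layout and the noise matrices are illustrative assumptions and do not reproduce the paper's MIO formulation.

    import numpy as np

    # Constant-velocity model in 1D: state x = [position, velocity]
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # only position is observed
    Q = 0.01 * np.eye(2)                     # process noise (assumed)
    R = np.array([[0.5]])                    # measurement noise (assumed)

    def kalman_step(x, P, z):
        """One predict/update cycle for a single target given measurement z."""
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.array([0.0, 1.0]), np.eye(2)
    for z in ([1.1], [2.0], [2.9]):          # one position measurement per scan
        x, P = kalman_step(x, P, np.array(z))
    print(x)                                 # estimated position and velocity

In a multi-target setting, one such filter would be maintained per hypothesized trajectory, with the data-association step deciding which measurement feeds which filter at each scan.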
Piening, Erk P; Baluch, Alina M; Salge, Torsten Oliver
2013-11-01
Given the limited understanding of temporal issues in extant theorizing about the link between human resource management (HRM) and performance, in this study we aim to shed light on how, when, and why HR interventions affect organizational performance. On the basis of longitudinal, multi-informant and multisource data from public hospital services in England, we provide new insights into the complex interplay between employees' perceptions of HR systems, job satisfaction, and performance outcomes over time. The dynamic panel data analyses provide support for changes in employees' experience of an HR system being related to subsequent changes in customer satisfaction, as mediated by changes in job satisfaction, albeit these effects decrease over time. Moreover, our longitudinal analyses highlight the importance of feedback effects in the HRM-performance chain, which otherwise appears to evolve in a cyclical manner. (c) 2013 APA, all rights reserved.
Multitarget sensing of glucose and cholesterol based on Janus hydrogel microparticles.
Sun, Xiao-Ting; Zhang, Ying; Zheng, Dong-Hua; Yue, Shuai; Yang, Chun-Guang; Xu, Zhang-Run
2017-06-15
A visualized sensing method for glucose and cholesterol was developed based on the two hemispheres of the same Janus hydrogel microparticles. Single-phase and Janus hydrogel microparticles were both generated using a centrifugal microfluidic chip. For glucose sensing, concanavalin A and fluorescein-labeled dextran used for a competitive binding assay were encapsulated in alginate microparticles, and the fluorescence of the microparticles was positively correlated with glucose concentration. For cholesterol sensing, microparticles embedded with γ-Fe2O3 nanoparticles were used as a catalyst for the oxidation of 3,3',5,5'-tetramethylbenzidine by H2O2, an enzymatic hydrolysis product of cholesterol. The color transition was more sensitive in the microparticles than in solution, indicating that the microparticles are more suitable for visualized determination. Furthermore, Janus microparticles were employed for multitarget sensing in their two hemispheres, and glucose and cholesterol were detected within the same microparticles without obvious interference. In addition, the particles could be manipulated by an external magnetic field. Glucose and cholesterol levels were measured in human serum using the microparticles, which confirmed the potential application of the microparticles in real sample detection. Copyright © 2017 Elsevier B.V. All rights reserved.
Sestito, Simona; Nesi, Giulia; Daniele, Simona; Martelli, Alma; Digiacomo, Maria; Borghini, Alice; Pietra, Daniele; Calderone, Vincenzo; Lapucci, Annalina; Falasca, Marco; Parrella, Paola; Notarangelo, Angelantonio; Breschi, Maria C; Macchia, Marco; Martini, Claudia; Rapposelli, Simona
2015-11-13
Aggressive behavior and diffuse infiltrative growth are the main features of glioblastoma multiforme (GBM), together with its high degree of resistance and recurrence. Evidence indicates that GBM-derived stem cells (GSCs), endowed with unlimited proliferative potential, play a critical role in tumor development and maintenance. Among the many signaling pathways involved in maintaining GSC stemness, tumorigenic potential, and anti-apoptotic properties, the PDK1/Akt pathway is a challenging target for developing new agents able to affect GBM resistance to chemotherapy. In an effort to find new PDK1/Akt inhibitors, we rationally designed and synthesized a small family of 2-oxindole derivatives. Among them, compound 3 inhibited PDK1 kinase and downstream effectors such as CHK1, GSK3α and GSK3β, which contribute to GSC survival. Compound 3 appeared to be a good tool for studying the role of the PDK1/Akt pathway in GSC self-renewal and tumorigenicity, and might represent the starting point for the development of more potent and focused multi-target therapies for GBM. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure which combines the relationship between the camera system and the theodolite system; the other is the automatic target detection on the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on Leica MS50 which has two built-in cameras each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multi-target.
Ozadali-Sari, Keriman; Tüylü Küçükkılınç, Tuba; Ayazgok, Beyza; Balkan, Ayla; Unsal-Tan, Oya
2017-06-01
The present study describes the synthesis, pharmacological evaluation (BChE/AChE inhibition, Aβ antiaggregation, and neuroprotective effects), and molecular modeling studies of novel 2-[4-(4-substituted piperazin-1-yl)phenyl]benzimidazole derivatives. The alkyl-substituted derivatives exhibited selective inhibition of BChE with varying efficiency. Compounds 3b and 3d were found to be the most potent inhibitors of BChE, with IC50 values of 5.18 and 5.22 μM, respectively. The kinetic studies revealed that 3b is a partial non-competitive BChE inhibitor. Molecular modeling studies also showed that the alkyl-substituted derivatives were able to reach the catalytic anionic site of BChE. The compounds with an inhibitory effect on BChE were subsequently screened for their Aβ antiaggregating and neuroprotective activities. Compounds 3a and 3b exerted a potential neuroprotective effect against H2O2- and Aβ-induced cytotoxicity in SH-SY5Y cells. Collectively, 3b was found to be the most promising compound for the development of multi-target directed ligands against Alzheimer's disease. Copyright © 2017 Elsevier Inc. All rights reserved.
Makhaeva, Galina F; Lushchekina, Sofya V; Boltneva, Natalia P; Sokolov, Vladimir B; Grigoriev, Vladimir V; Serebryakova, Olga G; Vikhareva, Ekaterina A; Aksinenko, Alexey Yu; Barreto, George E; Aliev, Gjumrakch; Bachurin, Sergey O
2015-08-18
Alzheimer's disease is a multifactorial pathology, and the development of new multitarget neuroprotective drugs is promising and attractive. We synthesized a group of original compounds which combine in one molecule the γ-carboline fragment of dimebon and the phenothiazine core of methylene blue (MB), linked by 1-oxo- and 2-hydroxypropylene spacers. The inhibitory activity of the conjugates toward acetylcholinesterase (AChE), butyrylcholinesterase (BChE) and the structurally related carboxylesterase (CaE), as well as their binding to NMDA receptors, was evaluated in vitro and in silico. The newly synthesized compounds showed significantly higher inhibitory activity toward BChE, with IC50 values in the submicromolar and micromolar range, and exhibited selective inhibitory action against BChE over AChE and CaE. Kinetic studies of the 9 most active compounds indicated that the majority of them were mixed-type BChE inhibitors. The main specific protein-ligand interaction is π-π stacking of the phenothiazine ring with the indole group of Trp82. These compounds emerge as promising, safe multitarget ligands for the further development of a therapeutic approach against aging-related neurodegenerative disorders such as Alzheimer's disease and/or other pathological conditions.
Poornima, Paramasivan; Kumar, Jothi Dinesh; Zhao, Qiaoli; Blunder, Martina; Efferth, Thomas
2016-09-01
Despite massive investments in drug research and development, the significant decline in the number of new drugs approved or translated to clinical use raises the question of whether single-target drug discovery is the right approach. To combat complex systemic diseases that harbour robust biological networks, such as cancer, single-target intervention has proved to be ineffective. In such cases, network pharmacology approaches are highly useful, because they differ from conventional drug discovery by addressing the ability of drugs to target numerous proteins or networks involved in a disease. Pleiotropic natural products are one of the promising strategies, owing to their multi-target action and lower side effects. In this review, we discuss the application of network pharmacology to cancer drug discovery. We provide an overview of the current state of knowledge on network pharmacology, focus on different technical approaches and implications for cancer therapy (e.g. polypharmacology and synthetic lethality), and illustrate the therapeutic potential with selected examples (green tea polyphenolics, Eleutherococcus senticosus, Rhodiola rosea, and Schisandra chinensis). Finally, we present future perspectives on their plausible applications for the diagnosis and therapy of cancer. Copyright © 2016 Elsevier Ltd. All rights reserved.
Elmasri, Wael A; Zhu, Rui; Peng, Wenjing; Al-Hariri, Moustafa; Kobeissy, Firas; Tran, Phat; Hamood, Abdul N; Hegazy, Mohamed F; Paré, Paul W; Mechref, Yehia
2017-07-07
Growth inhibition of the pathogen Staphylococcus aureus with currently available antibiotics is problematic, in part due to bacterial biofilm protection. Although recently characterized natural products, including 3',4',5-trihydroxy-6,7-dimethoxy-flavone [1], 3',4',5,6,7-pentahydroxy-flavone [2], and 5-hydroxy-4',7-dimethoxy-flavone [3], exhibit both antibiotic and biofilm inhibitory activities, the mode of action of such hydroxylated flavonoids with respect to S. aureus inhibition has yet to be characterized. Enzymatic digestion and high-resolution MS analysis of differentially expressed proteins from S. aureus with and without exposure to the antibiotic flavonoids (1-3) allowed the characterization of global protein alterations induced by metabolite treatment. A total of 56, 92, and 110 proteins were differentially expressed upon bacterial exposure to 1, 2, or 3, respectively. The connectivity of the identified proteins was characterized using a search tool for the retrieval of interacting genes/proteins (STRING), revealing multitargeted inhibition of S. aureus energy metabolism and biosynthesis by the assayed flavonoids. Identifying the mode of action of natural products as antibacterial agents is expected to provide insight into the potential use of flavonoids, alone or in combination with known therapeutic agents, to effectively control S. aureus infection.
NASA Astrophysics Data System (ADS)
Kabir, Md. Zahirul; Tee, Wei-Ven; Mohamad, Saharuddin B.; Alias, Zazali; Tayyab, Saad
2017-06-01
Binding studies of a multi-targeted anticancer drug, sunitinib (SU), with human serum albumin (HSA) were carried out using fluorescence, UV-vis absorption, circular dichroism (CD) and molecular docking analysis. Both the fluorescence quenching data and the UV-vis absorption results suggested formation of an SU-HSA complex. Moderate binding affinity between SU and HSA was evident from the value of the binding constant (3.04 × 10⁴ M⁻¹) obtained at 298 K. The involvement of hydrophobic interactions and hydrogen bonds as the leading intermolecular forces in the formation of the SU-HSA complex was inferred from the thermodynamic data of the binding reaction. These results were in good agreement with the molecular docking analysis. Microenvironmental perturbations around Tyr and Trp residues, as well as secondary and tertiary structural changes in HSA upon SU binding, were evident from the three-dimensional fluorescence and circular dichroism results. SU binding to HSA also improved the thermal stability of the protein. Competitive displacement results and molecular docking analysis located the binding site of SU on HSA in subdomain IIA (Sudlow's site I). The influence of a few common ions on the binding constant of the SU-HSA complex was also investigated.
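As a worked example of how the reported binding constant translates into a thermodynamic quantity, the standard Gibbs free energy of binding follows from ΔG° = -RT ln K; with K = 3.04 × 10⁴ M⁻¹ at 298 K this gives roughly -25.6 kJ/mol. The short calculation below is illustrative only and uses just the values quoted in the abstract.

    import math

    R = 8.314      # gas constant, J / (mol K)
    T = 298.0      # temperature at which the binding constant was obtained, K
    K = 3.04e4     # binding constant of the SU-HSA complex, M^-1

    delta_G = -R * T * math.log(K)                 # standard Gibbs free energy, J/mol
    print(f"dG = {delta_G / 1000:.1f} kJ/mol")     # about -25.6 kJ/mol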
A sequential multi-target Mps1 phosphorylation cascade promotes spindle checkpoint signaling
Ji, Zhejian; Gao, Haishan; Jia, Luying; Li, Bing; Yu, Hongtao
2017-01-01
The master spindle checkpoint kinase Mps1 senses kinetochore-microtubule attachment and promotes checkpoint signaling to ensure accurate chromosome segregation. The kinetochore scaffold Knl1, when phosphorylated by Mps1, recruits checkpoint complexes Bub1–Bub3 and BubR1–Bub3 to unattached kinetochores. Active checkpoint signaling ultimately enhances the assembly of the mitotic checkpoint complex (MCC) consisting of BubR1–Bub3, Mad2, and Cdc20, which inhibits the anaphase-promoting complex or cyclosome bound to Cdc20 (APC/C-Cdc20) to delay anaphase onset. Using in vitro reconstitution, we show that Mps1 promotes APC/C inhibition by MCC components through phosphorylating Bub1 and Mad1. Phosphorylated Bub1 binds to Mad1–Mad2. Phosphorylated Mad1 directly interacts with Cdc20. Mutations of Mps1 phosphorylation sites in Bub1 or Mad1 abrogate the spindle checkpoint in human cells. Therefore, Mps1 promotes checkpoint activation through sequentially phosphorylating Knl1, Bub1, and Mad1. This sequential multi-target phosphorylation cascade makes the checkpoint highly responsive to Mps1 and to kinetochore-microtubule attachment. DOI: http://dx.doi.org/10.7554/eLife.22513.001 PMID:28072388