History, rare, and multiple events of mechanical unfolding of repeat proteins
NASA Astrophysics Data System (ADS)
Sumbul, Fidan; Marchesi, Arin; Rico, Felix
2018-03-01
Mechanical unfolding of proteins consisting of repeat domains is an excellent tool to obtain large statistics. Force spectroscopy experiments using atomic force microscopy on proteins presenting multiple domains have revealed that unfolding forces depend on the number of folded domains (history) and have reported intermediate states and rare events. However, the common use of unspecific attachment approaches to pull the protein of interest poses important limitations for studying unfolding history and may lead to discarding rare and multiple probing events owing to unspecific adhesion and uncertainty about the pulling site. Site-specific methods that have recently emerged minimize this uncertainty and would be excellent tools to probe unfolding history and rare events. However, detailed characterization of these approaches is required to identify their advantages and limitations. Here, we characterize a site-specific binding approach based on the ultrastable dockerin/cohesin III complex, revealing its advantages and limitations for assessing unfolding history and for investigating rare and multiple events during the unfolding of repeated domains. We show that this approach is more robust and reproducible and provides larger statistics than conventional unspecific methods. We show that the method is optimal for revealing the history of unfolding from the very first domain and for detecting rare events, while being more limited for assessing intermediate states. Finally, we quantify the forces required to unfold two molecules pulled in parallel, a measurement that is difficult with unspecific approaches. The proposed method represents a step forward toward more reproducible measurements to probe protein unfolding history and opens the door to systematic probing of rare and multiple molecule unfolding mechanisms.
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes signal-to-noise (SN) ratios. The results reveal two advantages of this approach: the optimal parameter design is the same as in the traditional Taguchi method for a single quality characteristic, and the optimal design maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
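To make the SN-ratio machinery concrete, here is a minimal sketch (hypothetical replicate data; numpy assumed) of the three classic Taguchi ratio variants used to score parameter combinations:

```python
import numpy as np

def sn_larger_is_better(y):
    # SN = -10 log10( mean(1 / y^2) ): rewards large responses
    return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

def sn_smaller_is_better(y):
    # SN = -10 log10( mean(y^2) ): rewards small responses
    return -10 * np.log10(np.mean(np.asarray(y) ** 2))

def sn_nominal_is_best(y):
    # SN = 10 log10( mean^2 / variance ): rewards low variability about the mean
    y = np.asarray(y)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Hypothetical replicated responses for two rows of an orthogonal array
runs = [[41.2, 42.8, 40.9], [55.0, 54.1, 56.3]]
for y in runs:
    print(sn_larger_is_better(y), sn_nominal_is_best(y))
```

In a multi-response setting, one such ratio would be computed per quality characteristic and the results traded off, e.g., through a total quality loss.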
Australian Public Universities: Are They Practising a Corporate Approach to Governance?
ERIC Educational Resources Information Center
Christopher, Joseph
2014-01-01
This article draws on the multi-theoretical approach to governance and a qualitative research method to examine the extent to which the corporate approach is practised in Australian public universities. The findings reveal that in meeting the needs of multiple stakeholders, universities are faced with a number of structural, legalistic, and…
The Form and Flow of Teaching Ethnographic Knowledge: Hands-On Approaches for Learning Epistemology
ERIC Educational Resources Information Center
Corte, Ugo; Irwin, Katherine
2017-01-01
A glance across ethnographic methods terrain reveals multiple controversies and divisive critiques. When training graduate students, these debates and controversies can be consequential. We offer suggestions for teaching graduate ethnographic methods courses that, first, help students understand some of the common epistemological debates in the…
Richardson, Miles
2017-04-01
In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
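A minimal sketch of the ECOC idea described above, assuming a hypothetical code matrix and hard binary outputs in {-1, +1}; the two reviewed methods refine this baseline with per-classifier weights or a probabilistic bit-inversion model:

```python
import numpy as np

# Hypothetical ECOC code matrix for 4 classes and 5 binary classifiers:
# each row is a class codeword, each column defines one binary problem.
CODE = np.array([
    [+1, +1, +1, -1, -1],
    [+1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1],
    [-1, -1, +1, +1, +1],
])

def ecoc_decode(bits):
    """Assign the class whose codeword has the smallest Hamming distance
    to the vector of binary classifier outputs (in {-1, +1})."""
    bits = np.asarray(bits)
    hamming = np.sum(CODE != bits, axis=1)
    return int(np.argmin(hamming))

# One test point: classifier 3 flips its bit (a 'bit inversion error'),
# yet decoding still recovers class 0 thanks to the code's redundancy.
print(ecoc_decode([+1, +1, -1, -1, -1]))  # -> 0
```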
An evaluation of exact methods for the multiple subset maximum cardinality selection problem.
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2016-05-01
The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.
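As an illustration of the binary integer linear programming building block, the sketch below (assuming SciPy's milp solver and a hypothetical boolean compatibility matrix, not the authors' exact formulation) extracts one maximum cardinality subset; the sequential method would re-solve on the remaining objects:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def max_cardinality_subset(similar):
    """Largest subset whose members are pairwise compatible; `similar` is
    a boolean compatibility matrix. One binary variable per object; each
    incompatible pair (i, j) contributes a constraint x_i + x_j <= 1."""
    n = len(similar)
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            if not similar[i][j]:
                row = np.zeros(n)
                row[i] = row[j] = 1.0
                rows.append(row)
    cons = [LinearConstraint(np.array(rows), -np.inf, 1.0)] if rows else []
    res = milp(c=-np.ones(n),  # maximize cardinality = minimize -sum(x)
               constraints=cons, integrality=np.ones(n), bounds=Bounds(0, 1))
    return {i for i in range(n) if res.x[i] > 0.5}

# Sequential use: extract a subset, drop its members, re-solve on the rest.
S = np.array([[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]], dtype=bool)
print(max_cardinality_subset(S))
```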
Laine, Elodie; Carbone, Alessandra
2015-01-01
Protein-protein interactions (PPIs) are essential to all biological processes and they represent increasingly important therapeutic targets. Here, we present a new method for accurately predicting protein-protein interfaces and understanding their properties, origins and binding to multiple partners. In contrast to machine learning approaches, our method combines in a rational and very straightforward way three sequence- and structure-based descriptors of protein residues: evolutionary conservation, physico-chemical properties and local geometry. The implemented strategy yields very precise predictions for a wide range of protein-protein interfaces and discriminates them from small-molecule binding sites. Beyond its predictive power, the approach permits the dissection of interaction surfaces and unravels their complexity. We show how the analysis of the predicted patches can foster new strategies for PPI modulation and interaction surface redesign. The approach is implemented in JET2, an automated tool based on the Joint Evolutionary Trees (JET) method for sequence-based protein interface prediction. JET2 is freely available at www.lcqb.upmc.fr/JET2. PMID:26690684
Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao
2018-05-01
Flooding is a serious challenge that increasingly affects residents as well as policymakers. Flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship among the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in Hainan's eastern area, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers reach better informed, more comprehensive decisions. In summary, this study provides a new way forward for flood vulnerability assessment and disaster prevention decision-making. Copyright © 2018 Elsevier Ltd. All rights reserved.
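A minimal sketch of the FCEM synthesis step under stated assumptions (hypothetical indicator weights and membership matrix); the study's actual index system and the CDDM coupling step are not reproduced here:

```python
import numpy as np

# Hypothetical FCEM step: weights over 3 indicators, membership matrix R
# with rows = indicators, columns = 4 vulnerability levels.
w = np.array([0.5, 0.3, 0.2])
R = np.array([[0.1, 0.3, 0.4, 0.2],
              [0.0, 0.2, 0.5, 0.3],
              [0.3, 0.4, 0.2, 0.1]])

B = w @ R                 # fuzzy evaluation vector over the levels
print(B, B.argmax())      # the highest-membership level classifies the unit
```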
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, the percentage change of accessible surface area, the size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, by analyzing the distribution of amino acids we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots. Copyright © 2010 Elsevier Ltd. All rights reserved.
In Australia: Multiple Intelligences in Multiple Settings.
ERIC Educational Resources Information Center
Vialle, Wilma
1997-01-01
In Australia, Gardner's multiple-intelligences theory has strongly influenced primary, preschool, and special education. A survey of 30 schools revealed that teachers use two basic approaches: teaching to, and teaching through, multiple intelligences. The first approach might develop children's music skills via playing an instrument. The second…
Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.
Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael
2018-05-01
The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from the observations of multiple rotated particles. Our new operational framework detects particles, estimates their orientation, and reconstructs the final volume. The main challenge comes from the absence of an initial template and of a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem, and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly in multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
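For readers unfamiliar with word-based alignment-free methods, here is a toy sketch of the general idea (generic k-mer frequency profiles and a Euclidean distance; not the authors' specific pattern-based method):

```python
from collections import Counter
from itertools import product
from math import sqrt

def kmer_profile(seq, k=3, alphabet="ACGT"):
    """Normalized k-mer frequency vector: an alignment-free representation."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts["".join(w)] / total for w in product(alphabet, repeat=k)]

def euclidean(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# A pairwise distance matrix built this way feeds a standard distance-based
# tree builder (e.g., neighbor joining); no multiple alignment is required.
s1, s2 = "ACGTACGTGGA", "ACGTTCGTGGA"
print(euclidean(kmer_profile(s1), kmer_profile(s2)))
```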
Mollah, Mohammad Manir Hossain; Jamal, Rahman; Mokhtar, Norfilza Mohd; Harun, Roslan; Mollah, Md. Nurul Haque
2015-01-01
Background Identifying genes that are differentially expressed (DE) between two or more conditions with multiple patterns of expression is one of the primary objectives of gene expression data analysis. Several statistical approaches, including one-way analysis of variance (ANOVA), are used to identify DE genes. However, most of these methods provide misleading results for two or more conditions with multiple patterns of expression in the presence of outlying genes. In this paper, an attempt is made to develop a hybrid one-way ANOVA approach that unifies the robustness and efficiency of estimation using the minimum β-divergence method to overcome some problems that arise in the existing robust methods for both small- and large-sample cases with multiple patterns of expression. Results The proposed method relies on a β-weight function, which produces values between 0 and 1. The β-weight function with β = 0.2 is used as a measure of outlier detection. It assigns smaller weights (≥ 0) to outlying expressions and larger weights (≤ 1) to typical expressions. The distribution of the β-weights is used to calculate the cut-off point, which is compared to the observed β-weight of an expression to determine whether that gene expression is an outlier. This weight function plays a key role in unifying the robustness and efficiency of estimation in one-way ANOVA. Conclusion Analyses of simulated gene expression profiles revealed that all eight methods (ANOVA, SAM, LIMMA, EBarrays, eLNN, KW, robust BetaEB and proposed) perform almost identically for m = 2 conditions in the absence of outliers. However, the robust BetaEB method and the proposed method exhibited considerably better performance than the other six methods in the presence of outliers. In this case, the BetaEB method exhibited slightly better performance than the proposed method for the small-sample cases, but the proposed method exhibited much better performance than the BetaEB method for both the small- and large-sample cases in the presence of more than 50% outlying genes. The proposed method also exhibited better performance than the other methods for m > 2 conditions with multiple patterns of expression, where the BetaEB was not extended for this condition. Therefore, the proposed approach would be more suitable and reliable on average for the identification of DE genes between two or more conditions with multiple patterns of expression. PMID:26413858
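A rough sketch of what a β-weight function of this flavor can look like (hypothetical functional form and data; the paper's estimator is embedded in the minimum β-divergence framework rather than this simple plug-in, and the distribution-based cut-off step is omitted):

```python
import numpy as np

def beta_weights(x, beta=0.2):
    # Hypothetical plug-in sketch: weights lie in (0, 1], close to 1 for
    # typical expressions and close to 0 for outlying ones.
    x = np.asarray(x, dtype=float)
    mu, sigma = np.median(x), x.std()
    return np.exp(-0.5 * beta * ((x - mu) / sigma) ** 2)

expr = np.array([2.1, 1.9, 2.3, 2.0, 9.5])  # last expression is an outlier
print(beta_weights(expr).round(3))           # the smallest weight flags it
```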
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
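The ontology-graph feature expansion can be illustrated with a toy sketch (a hypothetical parent map standing in for the MeSH graph; UDT-RF's feature selection is not shown):

```python
# Hypothetical MeSH-like ontology: each term maps to its parent terms.
PARENTS = {"Apoptosis": ["Cell Death"], "Cell Death": ["Biological Phenomena"]}

def expand(terms):
    """Expand a document's term set with all ontology ancestors,
    enriching the feature space before classification."""
    seen, stack = set(), list(terms)
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

print(expand({"Apoptosis"}))
# -> {'Apoptosis', 'Cell Death', 'Biological Phenomena'}
```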
Remote Sensing of Cloud Top Heights Using the Research Scanning Polarimeter
NASA Technical Reports Server (NTRS)
Sinclair, Kenneth; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej
2015-01-01
Clouds cover roughly two thirds of the globe and act as an important regulator of Earth's radiation budget. Of these, multilayered clouds occur about half of the time and are predominantly two-layered. Changes in cloud top height (CTH) have been predicted by models to have a globally averaged positive feedback; however, observed changes in CTH have shown uncertain results. Additional CTH observations are necessary to better quantify this effect. Improved CTH observations will also allow for improved sub-grid parameterizations in large-scale models, and accurate CTH information is important when studying variations in freezing point and cloud microphysics. NASA's airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. The RSP scans along the aircraft track and obtains measurements at 152 viewing angles at any aircraft location. The approach presented here aggregates measurements from multiple scans to a single location at cloud altitude using a correlation function designed to identify the location-distinct features in each scan. During NASA's SEAC4RS air campaign, the RSP was mounted on the ER-2 aircraft along with the Cloud Physics Lidar (CPL), which made simultaneous measurements of CTH. The RSP's unique method of determining CTH is presented. The capabilities of using single channels and combinations of channels within the approach are investigated. A detailed comparison of RSP-retrieved CTHs with those of the CPL reveals the accuracy of the approach. Results indicate a strong ability for the RSP to accurately identify cloud heights. Interestingly, the analysis reveals an ability of the approach to identify multiple cloud layers in a single scene and estimate the CTH of each layer. Capabilities and limitations in identifying single and multiple cloud layer heights are explored. Special focus is given to sources of error in the method, including optically thin clouds, physically thick clouds and multi-layered clouds, as well as cloud phase. When determining multi-layered CTHs, limits on the upper cloud's opacity are assessed.
Single-cell multimodal profiling reveals cellular epigenetic heterogeneity.
Cheow, Lih Feng; Courtois, Elise T; Tan, Yuliana; Viswanathan, Ramya; Xing, Qiaorui; Tan, Rui Zhen; Tan, Daniel S W; Robson, Paul; Loh, Yuin-Han; Quake, Stephen R; Burkholder, William F
2016-10-01
Sample heterogeneity often masks DNA methylation signatures in subpopulations of cells. Here, we present a method to genotype single cells while simultaneously interrogating gene expression and DNA methylation at multiple loci. We used this targeted multimodal approach, implemented on an automated, high-throughput microfluidic platform, to assess primary lung adenocarcinomas and human fibroblasts undergoing reprogramming by profiling epigenetic variation among cell types identified through genotyping and transcriptional analysis.
Isolation with Migration Models for More Than Two Populations
Hey, Jody
2010-01-01
A method for studying the divergence of multiple closely related populations is described and assessed. The approach of Hey and Nielsen (2007, Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics. Proc Natl Acad Sci USA. 104:2785–2790) for fitting an isolation-with-migration model was extended to the case of multiple populations with a known phylogeny. Analysis of simulated data sets reveals the kinds of history that are accessible with a multipopulation analysis. Necessarily, processes associated with older time periods in a phylogeny are more difficult to estimate; and histories with high levels of gene flow are particularly difficult with more than two populations. However, for histories with modest levels of gene flow, or for very large data sets, it is possible to study large complex divergence problems that involve multiple closely related populations or species. PMID:19955477
Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier
2017-07-15
In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where its performance is compared with that of recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the remaining 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top rank (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in accuracy in segmenting WM lesions when compared with the rest of the evaluated methods, also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
Effective distances for epidemics spreading on complex networks.
Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M
2017-01-01
We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.
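For intuition, the sketch below computes the baseline logarithmic effective distance (the most-probable-path metric) that the abstract says is generalized by the random-walk measure; the transition matrix is hypothetical and SciPy's graph routines are assumed:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Row-stochastic transition matrix P built from flux data (hypothetical here)
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])

# Logarithmic effective distance of a link: d = 1 - ln p (small p => far)
with np.errstate(divide="ignore"):
    D = 1.0 - np.log(P)
D[np.isinf(D)] = 0.0                 # zero entries of P carry no direct edge
eff = shortest_path(D, method="D")   # most-probable-path effective distance
print(eff)
```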
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method that combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of the adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
Alternative Methods for Handling Attrition
Foster, E. Michael; Fang, Grace Y.
2009-01-01
Using data from the evaluation of the Fast Track intervention, this article illustrates three methods for handling attrition. Multiple imputation and ignorable maximum likelihood estimation produce estimates that are similar to those based on listwise-deleted data. A panel selection model that allows for selective dropout reveals that highly aggressive boys accumulate in the treatment group over time and produces a larger estimate of treatment effect. In contrast, this model produces a smaller treatment effect for girls. The article's conclusion discusses the strengths and weaknesses of the alternative approaches and outlines ways in which researchers might improve their handling of attrition. PMID:15358906
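Multiple imputation ends with a pooling step; a minimal sketch of Rubin's rules with hypothetical per-imputation estimates (the imputation models themselves are not shown):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine m complete-data estimates from multiply imputed datasets
    using Rubin's rules: total variance = within + (1 + 1/m) * between."""
    est, var = np.asarray(estimates), np.asarray(variances)
    m = len(est)
    qbar = est.mean()                  # pooled point estimate
    within = var.mean()                # average within-imputation variance
    between = est.var(ddof=1)          # between-imputation variance
    total = within + (1 + 1 / m) * between
    return qbar, np.sqrt(total)        # estimate and pooled standard error

# Treatment-effect estimates from m = 5 hypothetical imputed datasets
print(pool_rubin([0.42, 0.39, 0.45, 0.40, 0.44], [0.010] * 5))
```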
Multifractal characteristics of multiparticle production in heavy-ion collisions at SPS energies
NASA Astrophysics Data System (ADS)
Khan, Shaista; Ahmad, Shakeel
Entropy, dimensions and other multifractal characteristics of multiplicity distributions of relativistic charged hadrons produced in ion-ion collisions at SPS energies are investigated. The analysis of the experimental data is carried out in terms of the phase-space bin-size dependence of multiplicity distributions, following Takagi's approach. Yet another method is also followed to study the multifractality; it is not related to the bin width and (or) the detector resolution, but rather involves the multiplicity distribution of charged particles in full phase space in terms of information entropy and its generalization, Rényi's order-q information entropy. The findings reveal the presence of multifractal structure — a remarkable property of the fluctuations. Nearly constant values of the multifractal specific heat “c” estimated by the two different methods of analysis indicate that the parameter “c” may be used as a universal characteristic of particle production in high energy collisions. The results obtained from the analysis of the experimental data agree well with the predictions of the Monte Carlo model AMPT.
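The order-q information entropy at the core of the second method is straightforward to compute; a sketch with a hypothetical multiplicity distribution:

```python
import numpy as np

def renyi_entropy(p, q):
    """Order-q Renyi entropy H_q = ln(sum p_i^q) / (1 - q); the q -> 1
    limit is the Shannon entropy. A nonlinear H_q spectrum as a function
    of q signals multifractality."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))   # Shannon limit
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Hypothetical multiplicity distribution P(n) of charged particles
pn = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]
print([round(renyi_entropy(pn, q), 3) for q in (1, 2, 3, 4)])
```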
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao; Wan, Liang; Chen, Kai
2015-04-25
An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.
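A simplified sketch of the look-up table idea (a single hypothetical twin entry and an identity-only symmetry list; a real cubic crystal would supply all 24 proper point-group operations):

```python
import numpy as np

# Hypothetical one-entry look-up table: (angle in degrees, unit axis).
# A 60 degree rotation about <111> is the classic fcc sigma-3 twin.
TWIN_TABLE = [(60.0, np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0))]

def rot(axis, deg):
    """Rotation matrix about `axis` by `deg` degrees (Rodrigues formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    a = np.radians(deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

def angle_axis(R):
    """Angle (degrees) and unit axis of a proper rotation matrix."""
    ang = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = np.linalg.norm(v)
    return ang, (v / n if n > 1e-9 else v)

def is_twin(R_parent, R_grain, symmetry_ops, tol_deg=1.0):
    """Reduce the misorientation by each point-group operation and compare
    the resulting angle-axis pair against the look-up table."""
    M = R_grain @ R_parent.T
    for S in symmetry_ops:
        ang, ax = angle_axis(S @ M)
        for ang0, ax0 in TWIN_TABLE:
            if abs(ang - ang0) < tol_deg and abs(abs(ax @ ax0) - 1.0) < 0.02:
                return True
    return False

print(is_twin(np.eye(3), rot([1, 1, 1], 60.0), [np.eye(3)]))  # -> True
```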
Hoover, D R; Peng, Y; Saah, A J; Detels, R R; Day, R S; Phair, J P
A simple non-parametric approach is developed to simultaneously estimate net incidence and morbidity time from specific AIDS illnesses in populations at high risk for death from these illnesses and other causes. The disease-death process has four stages that can be recast as two sandwiching three-state multiple decrement processes. Non-parametric estimation of net incidence and morbidity time with error bounds is achieved from these sandwiching models through modification of methods from Aalen and Greenwood, and bootstrapping. An application to immunosuppressed HIV-1 infected homosexual men reveals that cytomegalovirus disease, Kaposi's sarcoma and Pneumocystis pneumonia are likely to occur and cause significant morbidity time.
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
Bland, Andrew J; Tobbell, Jane
2015-11-01
Simulation has become an established feature of undergraduate nurse education and as such requires extensive investigation. Research limited to pre-constructed categories imposed by some questionnaire and interview methods may provide only partial understanding. This is problematic for understanding the mechanisms of learning in simulation-based education, as contemporary distributed theories of learning posit that learning can be understood as the interaction of individual identity with context. This paper details a method of data collection and analysis that captures the interaction of individuals within the simulation experience and can be analysed through multiple lenses, including context and the lens of both researcher and learner. The study utilised a grounded theory approach involving 31 undergraduate third-year student nurses. Data were collected and analysed through non-participant observation, digital recordings of simulation activity and focus-group deconstruction of the recorded simulations by the participants and researcher. Focus group interviews enabled further clarification. The method revealed multiple levels of dynamic data. We conclude that, to better understand how students learn through social and active learning strategies, dynamic data are required that enable researchers and participants to unpack what is happening as it unfolds in action. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Barkauskiene, Rasa
2009-01-01
A person-oriented approach was used to examine the role of parenting in the associations between single learning disabilities and multiple learning disabilities and the adjustment difficulties in 8-11-year-olds. The results revealed that multiple, but not single, learning disabilities were associated with greater difficulties in emotional and…
NASA Astrophysics Data System (ADS)
Gulbahce, Natali; Yan, Han; Vidal, Marc; Barabasi, Albert-Laszlo
2010-03-01
Viral infections induce multiple perturbations that spread along the links of the biological networks of the host cells. Understanding the impact of these cascading perturbations requires an exhaustive knowledge of the cellular machinery as well as a systems biology approach that reveals how individual components of the cellular system function together. Here we describe an integrative method that provides a new approach to studying virus-human interactions and their correlations with diseases. Our method involves the combined utilization of protein-protein interactions, protein-DNA interactions, metabolomics and gene-disease associations to build a ``viral diseasome''. By solely using high-throughput data, we map well-known virus-associated diseases and predict new candidate viral diseases. We use microarray data of virus-infected tissues and patient medical history data to further test the implications of the viral diseasome. We apply this method to Epstein-Barr virus and Human Papillomavirus and shed light on the molecular development of viral diseases and disease pathways.
Resolution of structural heterogeneity in dynamic crystallography
Ren, Zhong; Chan, Peter W. Y.; Moffat, Keith; Pai, Emil F.; Royer, William E.; Šrajer, Vukica; Yang, Xiaojing
2013-01-01
Dynamic behavior of proteins is critical to their function. X-ray crystallography, a powerful yet mostly static technique, faces inherent challenges in acquiring dynamic information despite decades of effort. Dynamic ‘structural changes’ are often indirectly inferred from ‘structural differences’ by comparing related static structures. In contrast, the direct observation of dynamic structural changes requires the initiation of a biochemical reaction or process in a crystal. Both the direct and the indirect approaches share a common challenge in analysis: how to interpret the structural heterogeneity intrinsic to all dynamic processes. This paper presents a real-space approach to this challenge, in which a suite of analytical methods and tools to identify and refine the mixed structural species present in multiple crystallographic data sets have been developed. These methods have been applied to representative scenarios in dynamic crystallography, and reveal structural information that is otherwise difficult to interpret or inaccessible using conventional methods. PMID:23695239
Medves, Jennifer; Godfrey, Christina; Turner, Carly; Paterson, Margo; Harrison, Margaret; MacKenzie, Lindsay; Durando, Paola
2010-06-01
To synthesize the literature relevant to guideline dissemination and implementation strategies for healthcare teams and team-based practice. A systematic approach utilising Joanna Briggs Institute methods was taken. Two reviewers screened all articles and, where there was disagreement, a third reviewer determined inclusion. The initial search revealed 12,083 articles, of which 88 met the inclusion criteria. Ten dissemination and implementation strategies were identified, with distribution of educational materials the most common. Studies were assessed for patient or practitioner outcomes and changes in practice, knowledge and economic outcomes. A descriptive analysis revealed that multiple approaches using teams of healthcare providers were reported to have statistically significant results in knowledge, practice and/or outcomes for 72.7% of the studies. Team-based care using locally adapted practice guidelines can positively affect patient and provider outcomes. © 2010 The Authors. Journal Compilation © Blackwell Publishing Asia Pty Ltd.
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-01-01
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = −0.002, mean squared error = 0.025; PTDM: bias = −1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995–2008). PMID:27455963
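The time-dependent handling that eliminates immortal time amounts to splitting each subject's follow-up at first treatment exposure; a minimal sketch of that data transformation (hypothetical day-based fields, counting-process style rows for a time-dependent Cox fit):

```python
# Splitting follow-up at first exposure removes immortal time: the
# pre-treatment interval is counted as unexposed person-time.
def split_at_exposure(entry, first_rx, event_time, event):
    """Return (start, stop, exposed, event) rows; times are days since
    cohort entry, and first_rx is None if the subject is never treated."""
    if first_rx is None or first_rx >= event_time:
        return [(entry, event_time, 0, event)]
    return [(entry, first_rx, 0, 0),            # unexposed, event-free
            (first_rx, event_time, 1, event)]   # exposed thereafter

print(split_at_exposure(0, 120, 400, 1))
# [(0, 120, 0, 0), (120, 400, 1, 1)]
```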
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC), which represent simplified IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by classical test analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
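For reference, the three-parameter logistic model used for the TUV items has a simple closed form; a small sketch with hypothetical item parameters:

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic model: guessing floor c, difficulty b,
    discrimination a; theta is the latent ability."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Hypothetical TUV-like item: moderately discriminating, hard, 20% guessing
theta = np.linspace(-3, 3, 7)
print(p_correct_3pl(theta, a=1.2, b=0.8, c=0.2).round(2))
```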
Evaluating Gene Set Enrichment Analysis Via a Hybrid Data Model
Hua, Jianping; Bittner, Michael L.; Dougherty, Edward R.
2014-01-01
Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most of the existing comparison studies focus on whether the existing GSA methods can produce accurate P-values; however, practitioners are often more concerned with the correct gene-set ranking generated by the methods. The ranking performance is closely related to two critical goals associated with GSA methods: the ability to reveal biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that for the proposed data model, the Q2 type GSA methods have in general better performance than other GSA methods and the global test has the most robust results. The properties of a data set play a critical role in the performance. For the data sets with highly connected genes, all GSA methods suffer significantly in performance. PMID:24558298
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio
2017-07-01
This paper presents a dense ray tracing reconstruction technique for single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using the multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio and velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow, which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
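The multiplicative update at the heart of MART is compact; a toy sketch with a hypothetical 2-pixel, 3-voxel weight matrix (the DRT pre-selection of candidate voxels is not shown):

```python
import numpy as np

def mart_update(volume, weights, measured, mu=1.0):
    """One MART pass: scale each voxel by the ratio of the measured pixel
    intensity to the current forward projection, raised to mu * weight.
    weights[j, i] is the contribution of voxel i to pixel j."""
    for j, I_meas in enumerate(measured):
        I_proj = weights[j] @ volume
        if I_proj > 0:
            volume *= (I_meas / I_proj) ** (mu * weights[j])
    return volume

# Toy system: 2 pixels viewing 3 voxels; iterate toward consistency
W = np.array([[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]])
v = np.ones(3)
for _ in range(20):
    v = mart_update(v, W, measured=np.array([1.0, 2.0]))
print(v.round(3))
```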
Gehring, Tiago V.; Luksys, Gediminas; Sandi, Carmen; Vasilaki, Eleni
2015-01-01
The Morris Water Maze is a widely used task in studies of spatial learning with rodents. Classical performance measures of animals in the Morris Water Maze include the escape latency, and the cumulative distance to the platform. Other methods focus on classifying trajectory patterns to stereotypical classes representing different animal strategies. However, these approaches typically consider trajectories as a whole, and as a consequence they assign one full trajectory to one class, whereas animals often switch between these strategies, and their corresponding classes, within a single trial. To this end, we take a different approach: we look for segments of diverse animal behaviour within one trial and employ a semi-automated classification method for identifying the various strategies exhibited by the animals within a trial. Our method allows us to reveal significant and systematic differences in the exploration strategies of two animal groups (stressed, non-stressed), that would be unobserved by earlier methods. PMID:26423140
2015-01-01
Background Computer-aided drug design has a long history of being applied to discover new molecules to treat various cancers, but it has always been focused on single targets. The development of systems biology has let scientists reveal more hidden mechanisms of cancers, but attempts to apply systems biology to cancer therapies remain at preliminary stages. Our lab has successfully developed various systems biology models for several cancers. Based on these achievements, we present the first attempt to combine multiple-target therapy with systems biology. Methods In our previous study, we identified 28 significant proteins--i.e., common core network markers--of four types of cancers as house-keeping proteins of these cancers. In this study, we ranked these proteins by summing their carcinogenesis relevance values (CRVs) across the four cancers, and then performed docking and pharmacophore modeling to do virtual screening on the NCI database for anti-cancer drugs. We also performed pathway analysis on these proteins using Panther and MetaCore to reveal more mechanisms of these cancer house-keeping proteins. Results We designed several approaches to discover targets for multiple-target cocktail therapies. In the first one, we identified the top 20 drugs for each of the 28 cancer house-keeping proteins, and analyzed the docking pose to further understand the interaction mechanisms of these drugs. After screening for duplicates, we found that 13 of these drugs could target 11 proteins simultaneously. In the second approach, we chose the top 5 proteins with the highest summed CRVs and used them as the drug targets. We built a pharmacophore and applied it to do virtual screening against the Life-Chemical library for anti-cancer drugs. Based on these results, wet-lab bio-scientists could freely investigate combinations of these drugs for multiple-target therapy for cancers, in contrast to the traditional single target therapy. Conclusions Combination of systems biology with computer-aided drug design could help us develop novel drug cocktails with multiple targets. We believe this will enhance the efficiency of therapeutic practice and lead to new directions for cancer therapy. PMID:26680552
Multilevel joint competing risk models
NASA Astrophysics Data System (ADS)
Karunarathna, G. H. S.; Sooriyarachchi, M. R.
2017-09-01
Joint modeling approaches are often encountered for competing-risks time-to-event and count outcomes in many biomedical and epidemiological studies in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, as it provides a benchmark for measuring multiple terminations such as discharge, transfer, death, and patients who have not completed the event of interest by the end of follow-up (censored) during hospitalization. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients with a district cluster effect. Two key approaches have been applied to build the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results compared to fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).
Consistency of the Performance and Nonperformance Methods in Gifted Identification
ERIC Educational Resources Information Center
Acar, Selcuk; Sen, Sedat; Cayirdag, Nur
2016-01-01
Current approaches to gifted identification suggest collecting multiple sources of evidence. Some gifted identification guidelines allow for the interchangeable use of "performance" and "nonperformance" identification methods. This multiple criteria approach lacks a strong overlap between the assessment tools; however,…
Revealed Preference Methods for Studying Bicycle Route Choice - A Systematic Review
Pritchard, Ray
2018-03-07
One fundamental aspect of promoting utilitarian bicycle use involves making modifications to the built environment to improve the safety, efficiency and enjoyability of cycling. Revealed preference data on bicycle route choice can assist greatly in understanding the actual behaviour of a highly heterogeneous group of users, which in turn assists the prioritisation of infrastructure or other built environment initiatives. This systematic review seeks to compare the relative strengths and weaknesses of the empirical approaches for evaluating whole journey route choices of bicyclists. Two electronic databases were systematically searched for a selection of keywords pertaining to bicycle and route choice. In total seven families of methods are identified: GPS devices, smartphone applications, crowdsourcing, participant-recalled routes, accompanied journeys, egocentric cameras and virtual reality. The study illustrates a trade-off in the quality of data obtainable and the average number of participants. Future additional methods could include dockless bikeshare, multiple camera solutions using computer vision and immersive bicycle simulator environments. PMID:29518991
Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P
2010-08-07
Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid, quantitative method for the post-fabrication characterization of microchannel architecture, based on measuring flow at volumes relevant to microfluidics, is presented. Because flow is quantified from the mass of a dye solution passed through the device, the method circumvents traditional gravimetric and interface-tracking methods, which suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer, and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which depends on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
NASA Astrophysics Data System (ADS)
Zhang, Wenshuai; Zeng, Xiaoyan; Zhang, Li; Peng, Haiyan; Jiao, Yongjun; Zeng, Jun; Treutlein, Herbert R.
2013-06-01
In this work, we have developed a new approach to predict the epitopes of antigens that are recognized by a specific antibody. Our method is based on the "multiple copy simultaneous search" (MCSS) approach, which identifies optimal locations of small chemical functional groups on the surface of the antibody, and on identifying sequence patterns of peptides that can bind to that surface. The identified sequence patterns are then used to search the amino-acid sequence of the antigen protein. The approach was validated by reproducing the binding epitope of the HIV gp120 envelope glycoprotein for a human neutralizing antibody, as revealed in the available crystal structure. Our method was then applied to predict the epitopes of two glycoproteins of a newly discovered bunyavirus recognized by an antibody named MAb 4-5. These predicted epitopes can be verified by experimental methods. We also discuss the involvement of different amino acids in antigen-antibody recognition based on the distributions of MCSS minima of different functional groups.
Inferring imagined speech using EEG signals: a new approach using Riemannian manifold features
NASA Astrophysics Data System (ADS)
Nguyen, Chuong H.; Karavas, George K.; Artemiadis, Panagiotis
2018-02-01
Objective. In this paper, we investigate the suitability of imagined speech for brain-computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie on a Riemannian manifold, and the relevance vector machine classifier is proposed. The method is applied to electroencephalographic (EEG) signals and tested in multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery, collected from a total of 15 subjects, is also published.
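The covariance-descriptor idea can be sketched as follows: per-trial EEG covariance matrices are mapped to a vector space via the matrix logarithm (a log-Euclidean simplification of the full Riemannian treatment) and passed to an off-the-shelf classifier. The paper uses a relevance vector machine; logistic regression is substituted here purely for availability, and all data are random stand-ins.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 60, 8, 256
X = rng.standard_normal((n_trials, n_channels, n_samples))  # fake EEG trials
y = rng.integers(0, 2, n_trials)                            # two imagined words

def tangent_features(trial):
    c = np.cov(trial) + 1e-6 * np.eye(trial.shape[0])  # SPD covariance descriptor
    L = logm(c)                                        # log-Euclidean map
    iu = np.triu_indices_from(L)
    return L[iu].real                                  # upper triangle as feature vector

F = np.array([tangent_features(t) for t in X])
clf = LogisticRegression(max_iter=1000).fit(F, y)
print("training accuracy:", clf.score(F, y))
```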
Written content indicators of problematic approach behavior toward political officials.
Schoeneman, Katherine A; Scalora, Mario J; Darrow, Charles D; McLawsen, Julia E; Chang, Grace H; Zimmerman, William J
2011-01-01
Those charged with assessing and managing threatening communications must utilize risk factors that are behavioral, operational, and reasonably attainable during investigations. This project examined 326 written correspondence cases of an inappropriate, disruptive, or threatening nature that targeted political officials, with the specific goal of identifying written content indicators of problematic approach behavior. Results revealed that subjects who engaged in problematic approach activity toward their targets had more criminal history, past threat assessment activity, familiarity with firearms, past substance use, and indicators of serious mental illness. Approachers were more likely to engage in multiple contact methods, target dispersion, more overall contacts, and prior contact with their target. Numerous content themes were associated with future problematic approach, including longer handwritten correspondence, referencing specific events, making demands, mentioning stressors, focus on personal themes, feeling their rights were violated, and expressing an intention to approach. Harassing, insulting, and threatening language was not related to approach behavior. The implications of these findings are wide-ranging for the practice of threat assessment. Copyright © 2011 John Wiley & Sons, Ltd.
Optimizing Multiple QoS for Workflow Applications using PSO and Min-Max Strategy
NASA Astrophysics Data System (ADS)
Umar Ambursa, Faruku; Latip, Rohaya; Abdullah, Azizol; Subramaniam, Shamala
2017-08-01
Workflow scheduling under multiple QoS constraints is a complicated optimization problem. Metaheuristic techniques are excellent approaches for dealing with such problems, and many metaheuristic-based algorithms have been proposed that consider various economic and trustworthiness QoS dimensions. However, most of these approaches lead to high violation of user-defined QoS requirements in tight situations. Recently, a new Particle Swarm Optimization (PSO)-based QoS-aware workflow scheduling strategy (LAPSO) was proposed to improve performance in such situations. The LAPSO algorithm is designed around the synergy between a violation handling method and a hybrid of PSO and the min-max heuristic. Simulation results showed the great potential of the LAPSO algorithm for meeting user requirements even in tight situations. In this paper, the performance of the algorithm is analysed further. Specifically, the impact of the min-max strategy on the performance of the algorithm is revealed, by removing the violation handling from the operation of the algorithm. The results show that LAPSO based only on the min-max method still outperforms the benchmark, although LAPSO with violation handling performs significantly better.
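A toy illustration of a min-max assignment heuristic of the kind LAPSO hybridizes with PSO: each task is assigned to the service that minimizes its worst normalized QoS cost. The QoS dimensions and cost values are made up; this is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_services = 6, 4
time_cost = rng.uniform(1, 10, (n_tasks, n_services))    # hypothetical QoS dimension 1
money_cost = rng.uniform(1, 10, (n_tasks, n_services))   # hypothetical QoS dimension 2

# normalize each QoS dimension to [0, 1] so they are comparable
norm = lambda m: (m - m.min()) / (m.max() - m.min())
worst = np.maximum(norm(time_cost), norm(money_cost))    # per-task worst-case cost

assignment = worst.argmin(axis=1)   # min over services of the max QoS cost
print("task -> service:", assignment)
```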
Kim, Jinkyu; Kim, Gunn; An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh
2013-01-01
The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis. PMID:23300959
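A plug-in estimate of pairwise transfer entropy followed by a weighted sum, mirroring the aggregation step described above. The discretization scheme, the synthetic series, and the weights are all assumptions made for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

def transfer_entropy(x, y, bins=4):
    """TE from x to y with one-step lags, via plug-in histogram estimates (bits)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    yt, yp, xp = yd[1:], yd[:-1], xd[:-1]          # y_t, y_{t-1}, x_{t-1}
    te = 0.0
    for a in np.unique(yt):
        for b in np.unique(yp):
            for c in np.unique(xp):
                p_abc = np.mean((yt == a) & (yp == b) & (xp == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((yp == b) & (xp == c))
                p_ab = np.mean((yt == a) & (yp == b))
                p_b = np.mean(yp == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# two countries, two variables each (e.g., equity and FX returns; synthetic)
series = {("A", v): rng.standard_normal(500) for v in ("eq", "fx")}
series.update({("B", v): rng.standard_normal(500) for v in ("eq", "fx")})

# weighted-sum aggregation of the four variable-pair TEs into one A->B link
weights = {("eq", "eq"): 0.4, ("eq", "fx"): 0.2, ("fx", "eq"): 0.2, ("fx", "fx"): 0.2}
link_AB = sum(w * transfer_entropy(series[("A", u)], series[("B", v)])
              for (u, v), w in weights.items())
print("aggregate TE A->B:", link_AB)
```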
Multiple products monitoring as a robust approach for peptide quantification.
Baek, Je-Hyun; Kim, Hokeun; Shin, Byunghee; Yu, Myeong-Hee
2009-07-01
Quantification of target peptides and proteins is crucial for biomarker discovery. Approaches such as selected reaction monitoring (SRM) and multiple reaction monitoring (MRM) rely on liquid chromatography and mass spectrometric analysis of defined peptide product ions. These methods are not very widespread because determining quantifiable product ions for either SRM or MRM is a very time-consuming process. We developed a novel approach for quantifying target peptides without such an arduous ion-selection process. This method is based on monitoring multiple product ions (multiple products monitoring: MpM) from full-range MS2 spectra of a target precursor. The MpM method uses a scoring system that considers both the absolute intensities of product ions and the similarity between the query MS2 spectrum and the reference MS2 spectrum of the target peptide. Compared with conventional approaches, MpM greatly improves the sensitivity and selectivity of peptide quantification using an ion-trap mass spectrometer.
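A toy MpM-style score combining absolute product-ion intensity with spectral similarity between a query and a reference MS2 spectrum. The weighting scheme and the spectra are invented for illustration; this is not the published scoring function.

```python
import numpy as np

reference = np.array([0.0, 5.0, 80.0, 30.0, 10.0])   # reference MS2 intensities
query = np.array([1.0, 4.0, 70.0, 35.0, 8.0])        # observed MS2 intensities

# similarity term: cosine of the angle between query and reference spectra
cosine = query @ reference / (np.linalg.norm(query) * np.linalg.norm(reference))
# intensity term: reward absolute product-ion signal
abundance = np.log1p(query.sum())

mpm_score = abundance * cosine           # high only if intense AND similar
print(f"similarity={cosine:.3f}, MpM-like score={mpm_score:.2f}")
```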
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how best to integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of the Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
The Water-Energy-Food Nexus: Advancing Innovative, Policy-Relevant Methods
NASA Astrophysics Data System (ADS)
Crootof, A.; Albrecht, T.; Scott, C. A.
2017-12-01
The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex Anthropocene challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, a primary limitation of the nexus approach is the absence - or gaps and inconsistent use - of adequate methods to advance an innovative and policy-relevant nexus approach. This paper presents an analytical framework to identify robust nexus methods that align with nexus thinking and highlights innovative nexus methods at the frontier. The current state of nexus methods was assessed with a systematic review of 245 journal articles and book chapters. This review revealed (a) use of specific and reproducible methods for nexus assessment is uncommon - less than one-third of the reviewed studies present explicit methods; (b) nexus methods frequently fall short of capturing interactions among water, energy, and food - the very concept they purport to address; (c) assessments strongly favor quantitative approaches - 70% use primarily quantitative tools; (d) use of social science methods is limited (26%); and (e) many nexus methods are confined to disciplinary silos - only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. Despite some pitfalls of current nexus methods, there are a host of studies that offer innovative approaches to help quantify nexus linkages and interactions among sectors, conceptualize dynamic feedbacks, and support mixed method approaches to better understand WEF systems. Applying our analytical framework to all 245 studies, we identify, and analyze herein, seventeen studies that implement innovative multi-method and cross-scalar tools to demonstrate promising advances toward improved nexus assessment. This paper finds that, to make the WEF nexus effective as a policy-relevant analytical tool, methods are needed that incorporate social and political dimensions of water, energy, and food; utilize multiple and interdisciplinary approaches; and engage stakeholders and policy-makers.
Juxtaposed scripts, traits, and the dynamics of personality.
Thorne, A
1995-09-01
Although personality is theoretically composed of multiple facets that function in lively interrelatedness, the interplay among these multiplicities has mostly been missed by research that focuses on traits as the primary unit of personality. The juxtaposition of contrary interpersonal scripts is a promising way to capture dynamic processes of personality. A case study is used to illustrate the dynamic interplay between sociotropic (extraverted) and avoidant scripts. Whereas standard trait measures do not reveal how extraversion and avoidance co-relate in everyday experience, the dynamics are revealed by study of interpersonal scripts in narratives of memorable encounters. Similarities between the present approach and recent dialectical approaches to the self-concept are discussed (Hermans & Kempen, 1993). Such approaches, particularly when articulated so as to interface with more generalized units of personality, can be highly useful for advancing understanding of personality dynamics.
Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia
2014-03-01
Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue this research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular-dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole-matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate the shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed over the conventional full analytical approach. Optical experimental results are demonstrated which prove that the proposed method can effectively reconstruct three-dimensional scenes.
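The core algebraic step, recovering an affine matrix that maps a primitive triangle onto an arbitrary one from their vertex vectors via the pseudo-inverse, can be sketched in a few lines; the coordinates below are arbitrary.

```python
import numpy as np

primitive = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)  # reference triangle
arbitrary = np.array([[2, 1, 0], [4, 1, 1], [2, 3, 2]], dtype=float)  # scene triangle

# homogeneous coordinates so translation is folded into one matrix
P = np.hstack([primitive, np.ones((3, 1))])          # 3 x 4 vertex matrix
A = np.linalg.pinv(P) @ arbitrary                    # 4 x 3 affine transform

print(np.allclose(P @ A, arbitrary))                 # True: vertices map exactly
```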
The Testing Methods and Gender Differences in Multiple-Choice Assessment
NASA Astrophysics Data System (ADS)
Ng, Annie W. Y.; Chan, Alan H. S.
2009-10-01
This paper provides a comprehensive review of multiple-choice assessment over the past two decades, to help practitioners conduct effective testing in various subject areas. It reveals that a variety of multiple-choice test methods, viz. conventional multiple-choice, liberal multiple-choice, elimination testing, confidence marking, probability testing, and order-of-preference schemes, are available for assessing subjects' knowledge and decision ability. However, the best multiple-choice test method for use has not yet been identified. The review also indicates that gender differences in multiple-choice task performance might be due to the test area, instruction/scoring condition, and item difficulty.
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. Willingness-to-pay (WTP) estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data, especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
Simple F Test Reveals Gene-Gene Interactions in Case-Control Studies
Chen, Guanjie; Yuan, Ao; Zhou, Jie; Bentley, Amy R.; Adeyemo, Adebowale; Rotimi, Charles N.
2012-01-01
Missing heritability is still a challenge for genome-wide association studies (GWAS). Gene-gene interactions may partially explain this residual genetic influence and contribute broadly to complex disease. To analyze gene-gene interactions in case-control studies of complex disease, we propose a simple, non-parametric method that utilizes the F-statistic. This approach consists of three steps. First, we examine the joint distribution of a pair of SNPs in cases and controls separately. Second, an F-test is used to evaluate the ratio of dependence in cases to that in controls. Finally, results are adjusted for multiple tests. This method was used to evaluate gene-gene interactions associated with risk of Type 2 Diabetes among African Americans in the Howard University Family Study. We identified 18 gene-gene interactions (P < 0.0001). Compared with the commonly used logistic regression method, we demonstrate that the F-ratio test is an efficient approach to measuring gene-gene interactions, especially for studies with limited sample size. PMID:22837643
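A loose sketch of the workflow: the dependence between a SNP pair is measured in cases and in controls, and the two are compared with an F ratio of variances. The residual-variance dependence measure and the simulated genotypes are illustrative assumptions, not the authors' exact statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
cases = rng.integers(0, 3, (400, 2))      # genotypes 0/1/2 for SNP1, SNP2
controls = rng.integers(0, 3, (400, 2))
cases[:, 1] = (cases[:, 0] + rng.integers(0, 2, 400)) % 3   # induce dependence in cases

def residual_var(g):
    """Variance of SNP2 residuals after regressing on SNP1 (smaller = more dependent)."""
    slope, intercept = np.polyfit(g[:, 0], g[:, 1], 1)
    resid = g[:, 1] - (slope * g[:, 0] + intercept)
    return resid.var(ddof=2), len(resid) - 2

v_cases, df_cases = residual_var(cases)
v_controls, df_controls = residual_var(controls)
F = v_controls / v_cases                  # > 1 when cases are more dependent
p = stats.f.sf(F, df_controls, df_cases)  # one-sided p, before multiple-test adjustment
print(f"F={F:.2f}, p={p:.4f}")
```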
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple-cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
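A bare-bones sample average approximation: the expected second-stage (recourse) cost is estimated by averaging over sampled failure scenarios instead of enumerating all of them. Failure probabilities and rerouting costs are invented for illustration; the Benders decomposition layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
n_hubs, n_scenarios = 5, 1000
fail_prob = np.array([0.05, 0.10, 0.02, 0.08, 0.04])       # hypothetical failure rates
reroute_cost = np.array([30.0, 25.0, 40.0, 20.0, 35.0])    # recourse cost if hub fails

# sample scenarios: True = hub is down in that scenario
scenarios = rng.random((n_scenarios, n_hubs)) < fail_prob

# SAA estimate of the expected recourse cost of routing one unit via each hub
saa_cost = (scenarios * reroute_cost).mean(axis=0)
print("estimated expected reroute cost per hub:", saa_cost.round(2))
print("most reliable hub under SAA:", saa_cost.argmin())
```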
Compositional mining of multiple object API protocols through state abstraction.
Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin
2013-01-01
API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments. PMID:23844378
Middleton, David A; Hughes, Eleri; Madine, Jillian
2004-08-11
We describe an NMR approach for detecting the interactions between phospholipid membranes and proteins, peptides, or small molecules. First, 1H-13C dipolar coupling profiles are obtained from hydrated lipid samples at natural isotope abundance using cross-polarization magic-angle spinning NMR methods. Principal component analysis of dipolar coupling profiles for synthetic lipid membranes in the presence of a range of biologically active additives reveals clusters that relate to different modes of interaction of the additives with the lipid bilayer. Finally, by representing profiles from multiple samples in the form of contour plots, it is possible to reveal statistically significant changes in dipolar couplings, which reflect perturbations in the lipid molecules at the membrane surface or within the hydrophobic interior.
Yang, Mingxing; Li, Xiumin; Li, Zhibin; Ou, Zhimin; Liu, Ming; Liu, Suhuan; Li, Xuejun; Yang, Shuyu
2013-01-01
DNA microarray analysis is characterized by obtaining a large number of gene variables from a small number of observations. Cluster analysis is widely used to analyze DNA microarray data to make classification and diagnosis of disease. Because there are so many irrelevant and insignificant genes in a dataset, a feature selection approach must be employed in data analysis. The performance of cluster analysis of this high-throughput data depends on whether the feature selection approach chooses the most relevant genes associated with disease classes. Here we proposed a new method using multiple Orthogonal Partial Least Squares-Discriminant Analysis (mOPLS-DA) models and S-plots to select the most relevant genes to conduct three-class disease classification and prediction. We tested our method using Golub's leukemia microarray data. For three classes with subtypes, we proposed hierarchical orthogonal partial least squares-discriminant analysis (OPLS-DA) models and S-plots to select features for two main classes and their subtypes. For three classes in parallel, we employed three OPLS-DA models and S-plots to choose marker genes for each class. The power of feature selection to classify and predict three-class disease was evaluated using cluster analysis. Further, the general performance of our method was tested using four public datasets and compared with those of four other feature selection methods. The results revealed that our method effectively selected the most relevant features for disease classification and prediction, and its performance was better than that of the other methods.
The Water-Energy-Food Nexus: A systematic review of methods for nexus assessment
NASA Astrophysics Data System (ADS)
Albrecht, Tamee R.; Crootof, Arica; Scott, Christopher A.
2018-04-01
The water-energy-food (WEF) nexus is rapidly expanding in scholarly literature and policy settings as a novel way to address complex resource and development challenges. The nexus approach aims to identify tradeoffs and synergies of water, energy, and food systems, internalize social and environmental impacts, and guide development of cross-sectoral policies. However, while the WEF nexus offers a promising conceptual approach, the use of WEF nexus methods to systematically evaluate water, energy, and food interlinkages or support development of socially and politically-relevant resource policies has been limited. This paper reviews WEF nexus methods to provide a knowledge base of existing approaches and promote further development of analytical methods that align with nexus thinking. The systematic review of 245 journal articles and book chapters reveals that (a) use of specific and reproducible methods for nexus assessment is uncommon (less than one-third); (b) nexus methods frequently fall short of capturing interactions among water, energy, and food—the very linkages they conceptually purport to address; (c) assessments strongly favor quantitative approaches (nearly three-quarters); (d) use of social science methods is limited (approximately one-quarter); and (e) many nexus methods are confined to disciplinary silos—only about one-quarter combine methods from diverse disciplines and less than one-fifth utilize both quantitative and qualitative approaches. To help overcome these limitations, we derive four key features of nexus analytical tools and methods—innovation, context, collaboration, and implementation—from the literature that reflect WEF nexus thinking. By evaluating existing nexus analytical approaches based on these features, we highlight 18 studies that demonstrate promising advances to guide future research. This paper finds that to address complex resource and development challenges, mixed-methods and transdisciplinary approaches are needed that incorporate social and political dimensions of water, energy, and food; utilize multiple and interdisciplinary approaches; and engage stakeholders and decision-makers.
2012-01-01
Background Mitochondrial diseases comprise a diverse set of clinical disorders that affect multiple organ systems with varying severity and age of onset. Due to their clinical and genetic heterogeneity, these diseases are difficult to diagnose. We have developed a targeted exome sequencing approach to improve our ability to properly diagnose mitochondrial diseases and apply it here to an individual patient. Our method targets mitochondrial DNA (mtDNA) and the exons of 1,600 nuclear genes involved in mitochondrial biology or Mendelian disorders with multi-system phenotypes, thereby allowing for simultaneous evaluation of multiple disease loci. Case Presentation Targeted exome sequencing was performed on a patient initially suspected to have a mitochondrial disorder. The patient presented with diabetes mellitus, diffuse brain atrophy, autonomic neuropathy, optic nerve atrophy, and a severe amnestic syndrome. Further work-up revealed multiple heteroplasmic mtDNA deletions as well as profound thiamine deficiency without a clear nutritional cause. Targeted exome sequencing revealed a homozygous c.1672C > T (p.R558C) missense mutation in exon 8 of WFS1 that has previously been reported in a patient with Wolfram syndrome. Conclusion This case demonstrates how clinical application of next-generation sequencing technology can enhance the diagnosis of patients suspected to have rare genetic disorders. Furthermore, the finding of unexplained thiamine deficiency in a patient with Wolfram syndrome suggests a potential link between WFS1 biology and thiamine metabolism that has implications for the clinical management of Wolfram syndrome patients. PMID:22226368
Deconstructing Calculation Methods, Part 3: Multiplication
ERIC Educational Resources Information Center
Thompson, Ian
2008-01-01
In this third of a series of four articles, the author deconstructs the primary national strategy's approach to written multiplication. The approach to multiplication, as set out on pages 12 to 15 of the primary national strategy's "Guidance paper" "Calculation" (DfES, 2007), is divided into six stages: (1) mental…
Cervical Cancer Control for Hispanic Women in Texas: Effective Strategies from Research and Practice
Fernandez, Maria E.; Savas, Lara S.; Lipizzi, Erica; Smith, Jennifer S.; Vernon, Sally W.
2014-01-01
Purpose Hispanic women in Texas have among the highest rates of cervical cancer incidence and mortality in the country. Increasing regular Papanicolaou test screening and HPV vaccination are crucial to reduce the burden of cervical cancer among Hispanics. This paper presents lessons learned from community-based cervical cancer control programs in Texas and highlights effective intervention programs, methods and strategies. Methods We reviewed and summarized cervical cancer control efforts targeting Hispanic women in Texas, focusing on interventions developed by researchers at the University of Texas, School of Public Health. We identified commonalities across programs, highlighted effective methods, and summarized lessons learned to help guide future intervention efforts. Results Community-academic partnerships were fundamental in all steps of program development and implementation. Programs reviewed addressed psychosocial, cultural, and access barriers to cervical cancer control among low-income Hispanic women. Intervention approaches included lay health worker (LHW) and navigation models and used print media, interactive tailored media, photonovellas, client reminders, one-on-one and group education sessions. Conclusions Small media materials combined with LHW and navigation approaches were effective in delivering Pap test screening and HPV vaccination messages and in linking women to services. Common theoretical methods included in these approaches were modeling, verbal persuasion, and facilitating access. Adaptation of programs to an urban environment revealed that intensive navigation was needed to link women with multiple access barriers to health services. Collectively, this review reveals 1) the importance of using a systematic approach for planning and adapting cervical cancer control programs; 2) advantages of collaborative academic-community partnerships to develop feasible interventions with broad reach; 3) the use of small media and LHW approaches and the need for tailored phone navigation in urban settings; and 4) coordination and technical assistance of community-based efforts as a way to maximize resources. PMID:24398135
An efficient Bayesian meta-analysis approach for studying cross-phenotype genetic associations
Majumdar, Arunabha; Haldar, Tanushree; Bhattacharya, Sourabh; Witte, John S.
2018-01-01
Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic susceptibility across traits (pleiotropy). For a locus exhibiting overall pleiotropy, it is important to identify which specific traits underlie this association. We propose a Bayesian meta-analysis approach (termed CPBayes) that uses summary-level data across multiple phenotypes to simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an optimal subset of traits associated with the risk locus. This method uses a unified Bayesian statistical framework based on a spike and slab prior. CPBayes performs a fully Bayesian analysis by employing the Markov Chain Monte Carlo (MCMC) technique Gibbs sampling. It takes into account heterogeneity in the size and direction of the genetic effects across traits. It can be applied to both cohort data and separate studies of multiple traits having overlapping or non-overlapping subjects. Simulations show that CPBayes can produce higher accuracy in the selection of associated traits underlying a pleiotropic signal than the subset-based meta-analysis ASSET. We used CPBayes to undertake a genome-wide pleiotropic association study of 22 traits in the large Kaiser GERA cohort and detected six independent pleiotropic loci associated with at least two phenotypes. This includes a locus at chromosomal region 1q24.2 which exhibits an association simultaneously with the risk of five different diseases: Dermatophytosis, Hemorrhoids, Iron Deficiency, Osteoporosis and Peripheral Vascular Disease. We provide an R-package ‘CPBayes’ implementing the proposed method. PMID:29432419
2012-01-01
Background Many marine meiofaunal species are reported to have wide distributions, which creates a paradox considering their hypothesized low dispersal abilities. Correlated with this paradox is an especially high taxonomic deficit for meiofauna, partly related to a lower taxonomic effort and partly to a high number of putative cryptic species. Molecular-based species delineation and barcoding approaches have been advocated for meiofaunal biodiversity assessments to speed up description processes and uncover cryptic lineages. However, these approaches show sensitivity to sampling coverage (taxonomic and geographic) and the success rate has never been explored on mesopsammic Mollusca. Results We collected the meiofaunal sea-slug Pontohedyle (Acochlidia, Heterobranchia) from 28 localities worldwide. With a traditional morphological approach, all specimens fall into two morphospecies. However, with a multi-marker genetic approach, we reveal multiple lineages that are reciprocally monophyletic on single and concatenated gene trees in phylogenetic analyses. These lineages are largely concordant with geographical and oceanographic parameters, leading to our primary species hypothesis (PSH). In parallel, we apply four independent methods of molecular based species delineation: General Mixed Yule Coalescent model (GMYC), statistical parsimony, Bayesian Species Delineation (BPP) and Automatic Barcode Gap Discovery (ABGD). The secondary species hypothesis (SSH) is gained by relying only on uncontradicted results of the different approaches (‘minimum consensus approach’), resulting in the discovery of a radiation of (at least) 12 mainly cryptic species, 9 of them new to science, some sympatric and some allopatric with respect to ocean boundaries. However, the meiofaunal paradox still persists in some Pontohedyle species identified here with wide coastal and trans-archipelago distributions. Conclusions Our study confirms extensive, morphologically cryptic diversity among meiofauna and accentuates the taxonomic deficit that characterizes meiofauna research. We observe for Pontohedyle slugs a high degree of morphological simplicity and uniformity, which we expect might be a general rule for meiofauna. To tackle cryptic diversity in little explored and hard-to-sample invertebrate taxa, at present, a combined approach seems most promising, such as multi-marker-barcoding (i.e., molecular systematics using mitochondrial and nuclear markers and the criterion of reciprocal monophyly) combined with a minimum consensus approach across independent methods of molecular species delineation to define candidate species. PMID:23244441
ERIC Educational Resources Information Center
Rimpiläinen, Sanna
2015-01-01
What do different research methods and approaches "do" in practice? The article seeks to discuss this point by drawing upon socio-material research approaches and empirical examples taken from the early stages of an extensive case study on an interdisciplinary project between two multidisciplinary fields of study, education and computer…
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-08-15
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (sequential Cox: bias = -0.002, mean squared error = 0.025; PTDM: bias = -1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995-2008). © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
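A sketch of the time-dependent Cox setup that avoids immortal time bias, assuming the lifelines package is available: follow-up is split at treatment start so exposure status can switch within a subject. The tiny dataset is fabricated and serves only to show the counting-process data layout.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# each row is an interval of follow-up; "treated" flips at treatment start,
# so no treated interval is wrongly credited with pre-treatment survival
rows = [
    {"id": 1, "start": 0, "stop": 3,  "treated": 0, "event": 0},
    {"id": 1, "start": 3, "stop": 10, "treated": 1, "event": 0},
    {"id": 2, "start": 0, "stop": 6,  "treated": 0, "event": 1},
    {"id": 3, "start": 0, "stop": 2,  "treated": 0, "event": 0},
    {"id": 3, "start": 2, "stop": 8,  "treated": 1, "event": 1},
    {"id": 4, "start": 0, "stop": 9,  "treated": 0, "event": 0},
]
df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for the time-varying "treated" covariate
```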
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
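A minimal stacking sketch: scores from several inference algorithms are combined by a logistic-regression meta-classifier trained against ground truth. The three "methods" below are random stand-ins for real connectivity-inference outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_pairs = 2000
truth = (rng.random(n_pairs) < 0.1).astype(int)    # sparse ground-truth synapses

# three imperfect "inference algorithms": shared signal, independent noise
scores = np.column_stack([truth + rng.normal(0, s, n_pairs)
                          for s in (0.8, 1.0, 1.2)])

X_tr, X_te, y_tr, y_te = train_test_split(scores, truth, random_state=0)
stack = LogisticRegression().fit(X_tr, y_tr)       # learn per-method weights
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"ensemble AUC = {auc:.3f}, per-method weights = {stack.coef_.round(2)}")
```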
Alonso, Joan Francesc; Romero, Sergio; Mañanas, Miguel Ángel; Rojas, Mónica; Riba, Jordi; Barbanoj, Manel José
2015-10-01
The identification of the brain regions involved in a drug's neuropharmacological action is a potential procedure for drug development. These regions are commonly determined as the voxels showing significant statistical differences when placebo-induced effects are compared with drug-elicited effects. LORETA is an electroencephalography (EEG) source imaging technique frequently used to identify brain structures affected by a drug. The aim of the present study was to evaluate different methods for the correction of multiple comparisons in LORETA maps. These methods, which have been commonly used in neuroimaging and in simulation studies, were applied to a real pharmaco-EEG study investigating the effects of increasing benzodiazepine doses on the central nervous system as measured by LORETA. Data consisted of EEG recordings obtained from nine volunteers who received single oral doses of alprazolam 0.25, 0.5, and 1 mg, and placebo, in a randomized crossover double-blind design. The identification of active regions was highly dependent on the selected multiple-test correction procedure. The combined-criteria approach known as cluster mass was useful in revealing that increasing drug doses led to higher intensity and spread of the pharmacologically induced changes in intracerebral current density.
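A generic 1D cluster-mass permutation test, the combined height-and-extent criterion mentioned above, under a paired sign-flip scheme. The data and threshold are synthetic, and the voxel geometry is simplified to one dimension.

```python
import numpy as np

rng = np.random.default_rng(7)
drug = rng.normal(0.5, 1, (9, 100))        # 9 subjects x 100 voxels, true effect
placebo = rng.normal(0.0, 1, (9, 100))
d = drug - placebo                         # paired subject-wise differences

def tstat(x):
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)))

def max_cluster_mass(tvals, thresh=2.0):
    """Largest sum of suprathreshold t-values over contiguous voxels."""
    masses, mass = [0.0], 0.0
    for t in tvals:
        if t > thresh:
            mass += t
        elif mass:
            masses.append(mass)
            mass = 0.0
    if mass:
        masses.append(mass)
    return max(masses)

obs = max_cluster_mass(tstat(d))
# null distribution: randomly flip each subject's sign (exchangeability under H0)
null = [max_cluster_mass(tstat(d * rng.choice([-1, 1], (9, 1))))
        for _ in range(500)]
p = (1 + sum(m >= obs for m in null)) / (1 + len(null))
print(f"largest cluster mass = {obs:.1f}, permutation p = {p:.3f}")
```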
Understanding genetics: Analysis of secondary students' conceptual status
NASA Astrophysics Data System (ADS)
Tsui, Chi-Yan; Treagust, David F.
2007-02-01
This article explores the conceptual change of students in Grades 10 and 12 in three Australian senior high schools when the teachers included computer multimedia to a greater or lesser extent in their teaching of a genetics course. The study, underpinned by a multidimensional conceptual-change framework, used an interpretive approach and a case-based design with multiple data collection methods. Over 4-8 weeks, the students learned genetics in classroom lessons that included BioLogica activities, which feature multiple representations. Results of the online tests and interview tasks revealed that most students improved their understanding of genetics as evidenced in the development of genetics reasoning. However, using Thorley's (1990) status analysis categories, a cross-case analysis of the gene conceptions of 9 of the 26 students interviewed indicated that only 4 students' postinstructional conceptions were intelligible-plausible-fruitful. Students' conceptual change was consistent with classroom teaching and learning. Findings suggested that multiple representations supported conceptual understanding of genetics but not in all students. It was also shown that status can be a viable hallmark enabling researchers to identify students' conceptual change that would otherwise be less accessible. Thorley's method for analyzing conceptual status is discussed.
Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P
2015-07-30
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Systems and precision medicine approaches to diabetes heterogeneity: a Big Data perspective.
Capobianco, Enrico
2017-12-01
Big Data, and in particular Electronic Health Records, provide the medical community with a great opportunity to analyze multiple pathological conditions at an unprecedented depth for many complex diseases, including diabetes. How can we infer on diabetes from large heterogeneous datasets? A possible solution is provided by invoking next-generation computational methods and data analytics tools within systems medicine approaches. By deciphering the multi-faceted complexity of biological systems, the potential of emerging diagnostic tools and therapeutic functions can be ultimately revealed. In diabetes, a multidimensional approach to data analysis is needed to better understand the disease conditions, trajectories and the associated comorbidities. Elucidation of multidimensionality comes from the analysis of factors such as disease phenotypes, marker types, and biological motifs while seeking to make use of multiple levels of information including genetics, omics, clinical data, and environmental and lifestyle factors. Examining the synergy between multiple dimensions represents a challenge. In such regard, the role of Big Data fuels the rise of Precision Medicine by allowing an increasing number of descriptions to be captured from individuals. Thus, data curations and analyses should be designed to deliver highly accurate predicted risk profiles and treatment recommendations. It is important to establish linkages between systems and precision medicine in order to translate their principles into clinical practice. Equivalently, to realize their full potential, the involved multiple dimensions must be able to process information ensuring inter-exchange, reducing ambiguities and redundancies, and ultimately improving health care solutions by introducing clinical decision support systems focused on reclassified phenotypes (or digital biomarkers) and community-driven patient stratifications.
Uddin, M B; Chow, C M; Su, S W
2018-03-26
Sleep apnea (SA), a common sleep disorder, can significantly decrease quality of life and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The standard diagnostic process for SA, polysomnography, is costly and time consuming. In addition, the accuracy of different classification methods for detecting SA varies with the physiological signals used. An effective, reliable, and accurate classification method would make the diagnosis of SA and its associated treatment time-efficient and economical. This study aims to systematically review the literature, present an overview of classification methods for detecting SA from respiratory and oximetry signals, and address automated detection approaches. The sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Airflow and oxygen saturation signals alone were each effective in detecting SA for binary decision-making, whereas multiple signals were better for multi-class detection. In addition, some machine learning methods were superior to other classification methods for SA detection using respiratory and oximetry signals. For respiratory and oximetry signals, a good choice of classification method, together with consideration of associated factors, results in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.
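A simple rule-based sketch of oximetry-only screening: desaturation events (drops of at least 3% below a running baseline) are counted to approximate an oxygen desaturation index (ODI). The signal and thresholds are illustrative, not a validated detector.

```python
import numpy as np

rng = np.random.default_rng(8)
fs = 1                                     # 1 Hz oximetry sampling
spo2 = 97 + rng.normal(0, 0.3, 3600)       # one hour of synthetic SpO2 (%)
for start in range(300, 3600, 600):        # inject six desaturation dips
    spo2[start:start + 30] -= 5

baseline = np.convolve(spo2, np.ones(120) / 120, mode="same")  # 2-min running mean
below = spo2 < baseline - 3                # >= 3% drop from local baseline
events = np.sum(np.diff(below.astype(int)) == 1)   # rising edges = event onsets
odi = events / (len(spo2) / fs / 3600)     # events per hour of recording
print(f"detected events = {events}, ODI = {odi:.1f}/h")
```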
A comparison of email versus letter threat contacts toward members of the United States Congress.
Schoeneman-Morris, Katherine A; Scalora, Mario J; Chang, Grace H; Zimmerman, William J; Garner, Yancey
2007-09-01
To better understand inappropriate correspondence sent to public officials, 301 letter cases and 99 email cases were randomly selected from the United States Capitol Police investigative case files and compared. Results indicate that letter writers were significantly more likely than emailers to exhibit indicators of serious mental illness (SMI), engage in target dispersion, use multiple methods of contact, and make a problematic approach toward their target. Emailers were significantly more likely than letter writers to focus on government concerns, use obscene language, and display disorganization in their writing. Also, letter writers tended to be significantly older, have more criminal history, and write longer communications. A multivariate model found that disorganization, SMI symptoms, problematic physical approach, and target dispersion significantly differentiated between the correspondence groups. The group differences illuminated by this study reveal that letter writers are engaging in behavior that is higher risk for problematic approach than are emailers.
Berlin, Kathryn; Kruger, Tina; Klenosky, David B
2018-01-01
This mixed-methods study compares active older women in different physically based leisure activities and explores the difference in subjective ratings of successful aging and quantifiable predictors of success. A survey was administered to 256 women, 60-92 years of age, engaged in a sports- or exercise-based activity. Quantitative data were analyzed through ANOVA and multiple regression. Qualitative data (n = 79) was analyzed using the approach associated with means-end theory. While participants quantitatively appeared similar in terms of successful aging, qualitative interviews revealed differences in activity motivation. Women involved in sports highlighted social/psychological benefits, while those involved in exercise-based activities stressed fitness outcomes.
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas, or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: the first for the state of a database object (correct or incorrect), and the second for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect, and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that the proposed method supports the design of a highly reliable semi-automatic approach for road database verification.
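A toy Dempster-Shafer combination over the three outcome classes used above, where each module assigns mass to correct, incorrect, and unknown (the full frame). The module outputs here are made-up numbers, not values from the paper.

```python
def combine(m1, m2):
    """Dempster's rule over the frame {C, I}; 'U' is mass on the full frame."""
    conflict = m1["C"] * m2["I"] + m1["I"] * m2["C"]
    k = 1.0 - conflict                      # normalization constant
    return {
        "C": (m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]) / k,
        "I": (m1["I"] * m2["I"] + m1["I"] * m2["U"] + m1["U"] * m2["I"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }

module_a = {"C": 0.6, "I": 0.1, "U": 0.3}   # road model applicable, says correct
module_b = {"C": 0.2, "I": 0.2, "U": 0.6}   # road model barely applicable
print(combine(module_a, module_b))           # pooled belief for one road object
```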
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method and Cuppen's divide-and-conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed-up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
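Both algorithm families compared in the thesis are exposed by modern libraries. The Python sketch below uses SciPy's eigh_tridiagonal, whose 'stebz' driver computes eigenvalues by bisection and, per the SciPy documentation, obtains eigenvectors through a follow-up inverse-iteration call (?STEIN); it is a serial stand-in for the distributed-memory setting studied in the thesis.

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    rng = np.random.default_rng(0)
    n = 500
    d = rng.standard_normal(n)          # diagonal entries
    e = rng.standard_normal(n - 1)      # off-diagonal entries

    # Bisection for eigenvalues, inverse iteration (?STEIN) for eigenvectors:
    w_bis, v_bis = eigh_tridiagonal(d, e, lapack_driver="stebz")
    # Reference solution with SciPy's default driver:
    w_ref, _ = eigh_tridiagonal(d, e)

    print("max eigenvalue difference:", np.max(np.abs(w_bis - w_ref)))
    # Orthogonality of the inverse-iteration eigenvectors, the accuracy
    # concern the thesis analyses:
    print("max deviation from orthogonality:",
          np.max(np.abs(v_bis.T @ v_bis - np.eye(n))))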
High School Online: Pedagogy, Preferences, and Practices of Three Online Teachers
ERIC Educational Resources Information Center
Kerr, Shantia
2011-01-01
This multiple case study explores how three online, high school teachers used technological tools to create meaningful learning activities for their students. Findings reveal that teachers use a wide variety of tools and approaches to online learning. Tools are categorized as content, communication, and management tools. Approaches include…
Multiple Method Analysis of TiO2 Nanoparticle Uptake in Rice (Oryza sativa L.) Plants.
Deng, Yingqing; Petersen, Elijah J; Challis, Katie E; Rabb, Savelas A; Holbrook, R David; Ranville, James F; Nelson, Bryant C; Xing, Baoshan
2017-09-19
Understanding the translocation of nanoparticles (NPs) into plants is challenging because qualitative and quantitative methods are still being developed and the comparability of results among different methods is unclear. In this study, uptake of titanium dioxide NPs and larger bulk particles (BPs) in rice plant (Oryza sativa L.) tissues was evaluated using three orthogonal techniques: electron microscopy, single-particle inductively coupled plasma mass spectrometry (spICP-MS) with two different plant digestion approaches, and total elemental analysis using ICP optical emission spectrometry. In agreement with electron microscopy results, total elemental analysis of plants exposed to TiO2 NPs and BPs at 5 and 50 mg/L concentrations revealed that TiO2 NPs penetrated into the plant root and resulted in Ti accumulation in above-ground tissues at a higher level compared to BPs. spICP-MS analyses revealed that the size distributions of internalized particles differed between the NPs and BPs, with the NPs showing a distribution with smaller particles. Acid digestion resulted in higher particle numbers and the detection of a broader range of particle sizes than the enzymatic digestion approach, highlighting the need for development of robust plant digestion procedures for NP analysis. Overall, there was agreement among the three techniques regarding NP and BP penetration into rice plant roots, and spICP-MS made a unique contribution by providing size distribution information.
Kharroubi, Adel; Gargouri, Dorra; Baati, Houda; Azri, Chafai
2012-06-01
Concentrations of selected heavy metals (Cd, Pb, Zn, Cu, Mn, and Fe) in surface sediments from 66 sites in both northern and eastern Mediterranean Sea-Boughrara lagoon exchange areas (southeastern Tunisia) were studied in order to understand current metal contamination due to the urbanization and economic development of several nearby coastal regions of the Gulf of Gabès. Multiple approaches were applied for the sediment quality assessment. These approaches were based on GIS coupled with chemometric methods (enrichment factors, geoaccumulation index, principal component analysis, and cluster analysis). Enrichment factors and principal component analysis revealed two distinct groups of metals. The first group corresponded to Fe and Mn derived from natural sources, and the second group contained Cd, Pb, Zn, and Cu originating from man-made sources. For these latter metals, cluster analysis showed two distinct distributions in the selected areas. They were attributed to temporal and spatial variations in contaminant source inputs. The geoaccumulation index (Igeo) values indicated that only Cd, Pb, and Cu can be considered moderate to extreme pollutants in the studied sediments.
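The two indices are simple enough to compute directly. Below is a hedged Python sketch using the standard formulas EF = (M/Fe)_sample / (M/Fe)_background and Igeo = log2(Cn / (1.5 * Bn)); the background and sample concentrations are illustrative placeholders, not values from the study.

    import math

    # Placeholder background (crustal) and sample concentrations in mg/kg:
    background = {"Cd": 0.3, "Pb": 20.0, "Zn": 95.0, "Cu": 45.0, "Fe": 47200.0}
    sample = {"Cd": 1.8, "Pb": 60.0, "Zn": 210.0, "Cu": 90.0, "Fe": 35000.0}

    def enrichment_factor(metal, ref="Fe"):
        """EF = (M/Fe)_sample / (M/Fe)_background, Fe as conservative element."""
        return ((sample[metal] / sample[ref])
                / (background[metal] / background[ref]))

    def igeo(metal):
        """Geoaccumulation index Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5
        absorbs natural fluctuations of the background."""
        return math.log2(sample[metal] / (1.5 * background[metal]))

    for m in ("Cd", "Pb", "Zn", "Cu"):
        print(f"{m}: EF = {enrichment_factor(m):.1f}, Igeo = {igeo(m):.2f}")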
Assessing the chemical contamination dynamics in a mixed land use stream system.
Sonne, Anne Th; McKnight, Ursula S; Rønde, Vinni; Bjerg, Poul L
2017-11-15
Traditionally, the monitoring of streams for chemical and ecological status has been limited to surface water concentrations, where the dominant focus has been on general water quality and the risk for eutrophication. Mixed land use stream systems, comprising urban areas and agricultural production, are challenging to assess with multiple chemical stressors impacting stream corridors. New approaches are urgently needed for identifying relevant sources, pathways and potential impacts for implementation of suitable source management and remedial measures. We developed a method for the risk assessment of chemical stressors in these systems and applied the approach to a 16-km groundwater-fed stream corridor (Grindsted, Denmark). Three methods were combined: (i) in-stream contaminant mass discharge for source quantification, (ii) Toxic Units and (iii) environmental standards. An evaluation of the chemical quality of all three stream compartments - stream water, hyporheic zone, streambed sediment - made it possible to link chemical stressors to their respective sources and obtain new knowledge about source composition and origin. Moreover, Toxic Unit estimation and comparison to environmental standards revealed that the stream water quality was substantially impaired by both geogenic and diffuse anthropogenic sources of metals along the entire corridor, while the streambed was less impacted. Quantification of the contaminant mass discharge originating from a former pharmaceutical factory revealed that several hundred kilograms of chlorinated ethenes and pharmaceutical compounds discharge into the stream every year. The strongly reduced redox conditions in the plume result in high concentrations of dissolved iron and additionally release arsenic, generating the complex contaminant mixture found in the narrow discharge zone. The fingerprint of the plume was observed in the stream several km downgradient, while nutrients, inorganics and pesticides played a minor role for stream health. The results emphasize that future investigations should include multiple compounds and stream compartments, and highlight the need for holistic approaches to the risk assessment of these dynamic systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
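The Toxic Unit step is easy to make concrete. In the Python sketch below, each measured concentration is divided by an effect concentration for a standard test organism and the ratios are summed under an additive-mixture assumption; all compound names and EC50 values are illustrative placeholders rather than the study's data.

    # Placeholder EC50 values (ug/L) for a standard test organism and one
    # stream-water sample; compound names and numbers are illustrative only.
    ec50 = {"PCE": 3600.0, "TCE": 22000.0, "As": 2000.0, "Fe": 18000.0}
    sample = {"PCE": 120.0, "TCE": 850.0, "As": 35.0, "Fe": 9500.0}

    toxic_units = {c: sample[c] / ec50[c] for c in ec50}   # TU_i = C_i / EC50_i
    tu_sum = sum(toxic_units.values())                     # additive mixture

    print(toxic_units)
    print(f"sum of Toxic Units = {tu_sum:.3f}")   # screened against a threshold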
One-pot growth of two-dimensional lateral heterostructures via sequential edge-epitaxy
NASA Astrophysics Data System (ADS)
Sahoo, Prasana K.; Memaran, Shahriar; Xin, Yan; Balicas, Luis; Gutiérrez, Humberto R.
2018-01-01
Two-dimensional heterojunctions of transition-metal dichalcogenides have great potential for application in low-power, high-performance and flexible electro-optical devices, such as tunnelling transistors, light-emitting diodes, photodetectors and photovoltaic cells. Although complex heterostructures have been fabricated via the van der Waals stacking of different two-dimensional materials, the in situ fabrication of high-quality lateral heterostructures with multiple junctions remains a challenge. Transition-metal-dichalcogenide lateral heterostructures have been synthesized via single-step, two-step or multi-step growth processes. However, these methods lack the flexibility to control, in situ, the growth of individual domains. In situ synthesis of multi-junction lateral heterostructures avoids the multiple exchanges of sources or reactors required by previous approaches, which expose the edges to ambient contamination, compromise the homogeneity of domain size in periodic structures, and result in long processing times. Here we report a one-pot synthetic approach, using a single heterogeneous solid source, for the continuous fabrication of lateral multi-junction heterostructures consisting of monolayers of transition-metal dichalcogenides. The sequential formation of heterojunctions is achieved solely by changing the composition of the reactive gas environment in the presence of water vapour. This enables selective control of the water-induced oxidation and volatilization of each transition-metal precursor, as well as its nucleation on the substrate, leading to sequential edge-epitaxy of distinct transition-metal dichalcogenides. Photoluminescence maps confirm the sequential spatial modulation of the bandgap, and atomic-resolution images reveal defect-free lateral connectivity between the different transition-metal-dichalcogenide domains within a single crystal structure. Electrical transport measurements revealed diode-like responses across the junctions. Our new approach offers greater flexibility and control than previous methods for continuous growth of transition-metal-dichalcogenide-based multi-junction lateral heterostructures. These findings could be extended to other families of two-dimensional materials, and establish a foundation for the development of complex and atomically thin in-plane superlattices, devices and integrated circuits.
Multiple diffraction in an icosahedral Al-Cu-Fe quasicrystal
NASA Astrophysics Data System (ADS)
Fan, C. Z.; Weber, Th.; Deloudi, S.; Steurer, W.
2011-07-01
In order to reveal its influence on quasicrystal structure analysis, multiple diffraction (MD) effects in an icosahedral Al-Cu-Fe quasicrystal have been investigated in-house on an Oxford Diffraction four-circle diffractometer equipped with an Onyx™ CCD area detector and MoKα radiation. For that purpose, an automated approach for Renninger scans (ψ-scans) has been developed. Two weak reflections were chosen as the main reflections (called P) in the present measurements. As is well known for periodic crystals, it is also observed for this quasicrystal that the intensity of the main reflection may significantly increase if the simultaneous (H) and the coupling (P-H) reflections are both strong, while there is no obvious MD effect if one of them is weak. The occurrence of MD events during ψ-scans has been studied based on an ideal structure model and the kinematical MD theory. The reliability of the approach is revealed by the good agreement between simulation and experiment. It shows that the multiple diffraction effect is quite significant.
Essential pediatric hypertension: defining the educational needs of primary care pediatricians.
Cha, Stephen D; Chisolm, Deena J; Mahan, John D
2014-07-27
In order to better understand the educational needs regarding appropriate recognition, diagnosis and management of pediatric hypertension (HTN), we asked practicing pediatricians questions regarding their educational needs and comfort level on this topic. We conducted 4 focus group sessions that included 27 participants representing pediatric residents, adolescent medicine physicians, clinic based pediatricians and office based pediatricians. Each focus group session lasted for approximately an hour and 90 pages of total transcriptions were produced verbatim from audio recordings. Four reviewers read each transcript and themes were elucidated from these transcripts. Overall, 5 major themes related to educational needs and clinical concerns were found: utilization of resources to define blood pressure (BP), correct BP measurement method(s), co-morbidities, barriers to care, and experience level with HTN. Six minor themes were also identified: differences in BP measurement, accuracy of BP, recognition of HTN, practice pattern of care, education of families and patients, and differences in level of training. The focus group participants were also questioned on their preferences regarding educational methods (i.e. e-learning, small group sessions, self-study, large group presentations) and revealed varied teaching and learning preferences. There are multiple methods to approach education regarding pediatric HTN for primary care pediatricians based on provider preferences and multiple educational activities should be pursued to achieve best outcomes. Based on this data, the next direction will be to develop and deliver multiple educational methods and to evaluate the impact on practice patterns of care for children and adolescents with HTN.
Jonsen, Ian D; Myers, Ransom A; James, Michael C
2006-09-01
1. Biological and statistical complexity are features common to most ecological data that hinder our ability to extract meaningful patterns using conventional tools. Recent work on implementing modern statistical methods for analysis of such ecological data has focused primarily on population dynamics but other types of data, such as animal movement pathways obtained from satellite telemetry, can also benefit from the application of modern statistical tools. 2. We develop a robust hierarchical state-space approach for analysis of multiple satellite telemetry pathways obtained via the Argos system. State-space models are time-series methods that allow unobserved states and biological parameters to be estimated from data observed with error. We show that the approach can reveal important patterns in complex, noisy data where conventional methods cannot. 3. Using the largest Atlantic satellite telemetry data set for critically endangered leatherback turtles, we show that the diel pattern in travel rates of these turtles changes over different phases of their migratory cycle. While foraging in northern waters the turtles show similar travel rates during day and night, but on their southward migration to tropical waters travel rates are markedly faster during the day. These patterns are generally consistent with diving data, and may be related to changes in foraging behaviour. Interestingly, individuals that migrate southward to breed generally show higher daytime travel rates than individuals that migrate southward in a non-breeding year. 4. Our approach is extremely flexible and can be applied to many ecological analyses that use complex, sequential data.
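The core idea of estimating unobserved states from error-prone telemetry can be illustrated with the simplest linear-Gaussian case. The Python sketch below filters a one-dimensional random-walk track observed with noise using a Kalman filter; the authors' hierarchical Bayesian movement model is considerably richer, so this is only a minimal illustration of the state-space principle.

    import numpy as np

    rng = np.random.default_rng(1)
    T, q, r = 200, 0.1, 2.0                        # steps, process var, obs var
    x = np.cumsum(rng.normal(0, np.sqrt(q), T))    # true track (unobserved)
    y = x + rng.normal(0, np.sqrt(r), T)           # noisy telemetry fixes

    xf = np.zeros(T)                               # filtered state estimates
    xf[0], P = y[0], r
    for t in range(1, T):
        P_pred = P + q                             # predict one step ahead
        K = P_pred / (P_pred + r)                  # Kalman gain
        xf[t] = xf[t - 1] + K * (y[t] - xf[t - 1]) # correct with observation
        P = (1 - K) * P_pred

    print("RMSE of raw fixes:", np.sqrt(np.mean((y - x) ** 2)))
    print("RMSE of filtered: ", np.sqrt(np.mean((xf - x) ** 2)))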
Body Fat Percentage Prediction Using Intelligent Hybrid Approaches
Shao, Yuehjen E.
2014-01-01
Excess body fat often leads to obesity. Obesity is typically associated with serious medical diseases, such as cancer, heart disease, and diabetes. Accordingly, estimating body fat accurately is important for monitoring health. Although there are several ways to measure the body fat percentage (BFP), accurate methods are often inconvenient and/or costly. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches to obtain fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling includes the use of MR and MARS to obtain fewer but more important sets of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
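A two-stage scheme of this kind is straightforward to approximate with common tools. The Python sketch below screens explanatory variables with a linear-model F-test (standing in for the MR/MARS selection stage) and feeds the survivors to SVR; the data are synthetic and the stage-1 selector is a simplification of the paper's MR and MARS variants.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for 13 body measurements; only a few drive the BFP.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((250, 13))
    bfp = 20 + 3 * X[:, 0] - 2 * X[:, 3] + 1.5 * X[:, 7] + rng.normal(0, 1, 250)

    two_stage = make_pipeline(
        SelectKBest(f_regression, k=4),   # stage 1: keep few key variables
        SVR(C=10.0, epsilon=0.5),         # stage 2: nonlinear forecaster
    )
    single_stage = SVR(C=10.0, epsilon=0.5)   # baseline on all 13 variables

    for name, model in [("two-stage", two_stage), ("single-stage", single_stage)]:
        r2 = cross_val_score(model, X, bfp, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")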
NASA Astrophysics Data System (ADS)
Chen, Ying-Chih; Hand, Brian; Norton-Meier, Lori
2017-04-01
The purpose of this study was to investigate the various roles that early elementary teachers adopt when questioning, to scaffold dialogic interaction and students' cognitive responses for argumentative practices over time. Teacher questioning is a pivotal factor shaping the role teachers play in promoting dialogic interaction in argumentative practice, and different roles serve different functions in promoting students' conceptual understanding. The multiple-case study was designed as a follow-up study after a 4-year professional development program that emphasized an argument-based inquiry approach. Data sources included 30 lessons focusing on whole-class discussion from three early elementary teachers' classes. Data were analyzed through two approaches: (1) the constant comparative method and (2) an enumerative approach. This study conceptualized four critical roles of teacher questioning—dispenser, moderator, coach, and participant—in light of the ownership of ideas and activities. The findings revealed two salient changes in teachers' use of questions and the relationships between teachers' question-asking and students' cognitive responses: (1) teachers increasingly used multiple roles in establishing argumentative discourse as they persistently implemented an argument-based inquiry approach, and (2) as teachers used multiple roles in establishing patterns of questioning and framing classroom interactions, higher levels of student cognitive responses were promoted. This study suggests that an essential component of teacher professional development should include the study of the various roles that teachers can play when questioning for establishing dialogic interaction in argumentation and that this development should consist of ongoing training with systematic support.
Lee, Jimin; Hustad, Katherine C.; Weismer, Gary
2014-01-01
Purpose Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystem approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (SMI), and nine judged to be free of dysarthria (NSMI). Data from children with CP were compared to data from age-matched typically developing children (TD). Results Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and TD groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (Adjusted R-squared = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R-squared analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Conclusions Children in the SMI group have articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems. PMID:24824584
Fast alignment-free sequence comparison using spaced-word frequencies.
Leimeister, Chris-Andre; Boden, Marcus; Horwege, Sebastian; Lindner, Sebastian; Morgenstern, Burkhard
2014-07-15
Alignment-free methods for sequence comparison are increasingly used for genome analysis and phylogeny reconstruction; they circumvent various difficulties of traditional alignment-based approaches. In particular, alignment-free methods are much faster than pairwise or multiple alignments. They are, however, less accurate than methods based on sequence alignment. Most alignment-free approaches work by comparing the word composition of sequences. A well-known problem with these methods is that neighbouring word matches are far from independent. To reduce the statistical dependency between adjacent word matches, we propose to use 'spaced words', defined by patterns of 'match' and 'don't care' positions, for alignment-free sequence comparison. We describe a fast implementation of this approach using recursive hashing and bit operations, and we show that further improvements can be achieved by using multiple patterns instead of single patterns. To evaluate our approach, we use spaced-word frequencies as a basis for fast phylogeny reconstruction. Using real-world and simulated sequence data, we demonstrate that our multiple-pattern approach produces better phylogenies than approaches relying on contiguous words. Our program is freely available at http://spaced.gobics.de/. © The Author 2014. Published by Oxford University Press.
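The spaced-word idea is compact enough to show in full. The following Python sketch extracts spaced words under a binary pattern, tabulates their relative frequencies, and compares two sequences by Euclidean distance; the pattern and distance measure are illustrative choices, not necessarily those used in the published tool.

    from collections import Counter
    import math

    PATTERN = "1101001"          # '1' = match position, '0' = don't care

    def spaced_word_freqs(seq, pattern=PATTERN):
        """Count the characters at the '1' positions of every window."""
        keep = [i for i, c in enumerate(pattern) if c == "1"]
        return Counter("".join(seq[i + j] for j in keep)
                       for i in range(len(seq) - len(pattern) + 1))

    def spaced_word_distance(seq_a, seq_b):
        """Euclidean distance between relative spaced-word frequencies."""
        fa, fb = spaced_word_freqs(seq_a), spaced_word_freqs(seq_b)
        na, nb = sum(fa.values()), sum(fb.values())
        return math.sqrt(sum((fa[w] / na - fb[w] / nb) ** 2
                             for w in set(fa) | set(fb)))

    a = "ACGTACGTTGCAACGTTGACGTAGCT" * 5
    b = "ACGTACCTTGCAACGATGACGTAGCT" * 5     # two mismatches per repeat unit
    print(spaced_word_distance(a, b))        # small for similar sequences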
Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico
2014-01-01
Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell-types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network-modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, our method showed distinctive advantages over existing methods: it accurately recovered both common and condition-specific network-modules without entailing the ad hoc input parameters required by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interactions processes in ventricular zones during neocortex expansion and, further, we uncovered pathways related to the development of later cognitive functions in the cortical plate of the developing brain which were previously unappreciated. Analyses of seven rat tissues identified a multi-tissue subnetwork of co-expressed heat shock protein (Hsp) and cardiomyopathy genes (Bag3, Cryab, Kras, Emd, Plec), which was significantly replicated using separate failing heart and liver gene expression datasets in humans, thus revealing a conserved functional role for Hsp genes in cardiovascular disease.
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of these methods are suitable only for detecting common variants associated with multiple traits. However, because of the low minor allele frequencies of rare variants, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA), originally developed for a single trait, to test association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. Further, we take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to different directions of effects of causal variants.
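A rough sketch of the combination step is shown below: per-variant P values (in practice obtained from the reverse regression) are truncated at a threshold and combined with MAF-based weights, with significance assessed against a null reference. The truncation rule, the weights and the uniform null used here are simplifying assumptions; the published ADA procedure assesses significance by permutation of the trait data.

    import numpy as np

    def combined_stat(p_values, maf, p_threshold=0.1):
        """Weighted combination of per-variant P values below a threshold."""
        w = 1.0 / np.sqrt(maf * (1.0 - maf))   # up-weight rarer variants
        keep = p_values < p_threshold          # adaptive truncation
        if not np.any(keep):
            return 0.0
        return float(np.sum(w[keep] * -np.log(p_values[keep])))

    p_obs = np.array([0.004, 0.03, 0.51, 0.22, 0.08])   # per-variant P values
    maf = np.array([0.005, 0.010, 0.020, 0.008, 0.015])
    t_obs = combined_stat(p_obs, maf)

    # Crude null reference from independent uniform P values; a real analysis
    # would permute the trait data and recompute the statistic instead.
    rng = np.random.default_rng(2)
    null = np.array([combined_stat(rng.uniform(size=p_obs.size), maf)
                     for _ in range(10000)])
    print("statistic:", round(t_obs, 2), " empirical P:", np.mean(null >= t_obs))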
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung
2017-12-14
Personalized dosimetry with high accuracy is becoming more important because of the growing interest in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, which is referred to as the multiple VSV approach. Methods: Multiple numbers (N) of VSVs for media with different densities covering the whole-body density range were used instead of a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation; those were convoluted with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by the summation of N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel- and organ-levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements regarding voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimations were observed at lung boundaries. In the case of multiple VSVs (N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared to the single VSV and organ-based dosimetry. Errors at the organ-level were -6.71%, 2.17%, and 227.46% for the single VSV, multiple VSV, and organ-based dosimetry, respectively. Conclusion: The multiple VSV approach for heterogeneous media with non-uniform activity distributions offers fast personalized dosimetry at whole-body level, yielding results comparable to those of the direct Monte Carlo approach. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
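The segment-convolve-sum structure of the multiple-VSV method can be mimicked on a toy grid. In the Python sketch below, each density class gets its own smoothing kernel (a Gaussian stand-in for a Monte Carlo-derived VSV), the full activity map is convolved once per class, and the CT-style density mask selects which convolution supplies each voxel; kernels, densities and grid values are all illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    shape = (40, 40, 40)
    rng = np.random.default_rng(3)
    activity = rng.random(shape)             # time-integrated activity map
    density = np.full(shape, 1.0)            # soft tissue everywhere...
    density[:, :, :10] = 0.3                 # ...except a "lung" slab
    density[:, :, 30:] = 1.9                 # ...and a "bone" slab

    # density -> kernel width; a Gaussian stands in for a Monte Carlo VSV
    # (wider energy spread in low-density media, narrower in dense media):
    kernel_width = {0.3: 3.0, 1.0: 1.5, 1.9: 0.8}

    dose = np.zeros(shape)
    for rho, sigma in kernel_width.items():
        mask = density == rho                          # CT-based segmentation
        dose_rho = gaussian_filter(activity, sigma)    # density-specific map
        dose[mask] = dose_rho[mask]                    # keep this medium's voxels

    print("mean dose, lung-like slab:", dose[:, :, :10].mean())
    print("mean dose, bone-like slab:", dose[:, :, 30:].mean())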
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja
2018-01-01
A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, utilizes similarly the false discovery rate approach to multiple testing, and…
MANGO: a new approach to multiple sequence alignment.
Zhang, Zefeng; Lin, Hao; Li, Ming
2007-01-01
Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. Full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (e.g., sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy-to-apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
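The effect of verification bias, and the weighting idea used to correct it, can be demonstrated numerically. The Python toy simulation below verifies all screen-positives but only a fraction of screen-negatives, then reweights the verified negatives by the inverse of their sampling fraction; the paper embeds such weights in a GEE model to handle several assays jointly, which this sketch does not attempt.

    import numpy as np

    rng = np.random.default_rng(4)
    n, prev, sens, spec = 20000, 0.05, 0.85, 0.90
    disease = rng.random(n) < prev
    test_pos = np.where(disease, rng.random(n) < sens, rng.random(n) > spec)

    f = 0.1                                   # verify only 10% of screen negatives
    verified = test_pos | (rng.random(n) < f)
    w = np.where(test_pos, 1.0, 1.0 / f)      # inverse-probability weights

    d, t, v = disease, test_pos, verified
    naive = (t & d & v).sum() / (d & v).sum()             # biased upward
    ipw = (w * (t & d & v)).sum() / (w * (d & v)).sum()   # weighted estimate
    print(f"true sensitivity {sens:.2f}; naive {naive:.2f}; weighted {ipw:.2f}")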
Quantifying cause-related mortality by weighting multiple causes of death
Moreno-Betancur, Margarita; Lamarche-Vadel, Agathe; Rey, Grégoire
2016-01-01
Abstract Objective To investigate a new approach to calculating cause-related standardized mortality rates that involves assigning weights to each cause of death reported on death certificates. Methods We derived cause-related standardized mortality rates from death certificate data for France in 2010 using: (i) the classic method, which considered only the underlying cause of death; and (ii) three novel multiple-cause-of-death weighting methods, which assigned weights to multiple causes of death mentioned on death certificates: the first two multiple-cause-of-death methods assigned non-zero weights to all causes mentioned and the third assigned non-zero weights to only the underlying cause and other contributing causes that were not part of the main morbid process. As the sum of the weights for each death certificate was 1, each death had an equal influence on mortality estimates and the total number of deaths was unchanged. Mortality rates derived using the different methods were compared. Findings On average, 3.4 causes per death were listed on each certificate. The standardized mortality rate calculated using the third multiple-cause-of-death weighting method was more than 20% higher than that calculated using the classic method for five disease categories: skin diseases, mental disorders, endocrine and nutritional diseases, blood diseases and genitourinary diseases. Moreover, this method highlighted the mortality burden associated with certain diseases in specific age groups. Conclusion A multiple-cause-of-death weighting approach to calculating cause-related standardized mortality rates from death certificate data identified conditions that contributed more to mortality than indicated by the classic method. This new approach holds promise for identifying underrecognized contributors to mortality. PMID:27994280
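The bookkeeping behind the weighting methods is easy to demonstrate: weights on each certificate sum to 1, so cause-specific counts change while the total number of deaths is preserved. The weighting rule in the Python sketch below (half to the underlying cause, the remainder shared among contributing causes) is an illustrative assumption, not one of the paper's three schemes.

    import pandas as pd

    # Three toy death certificates: an underlying cause plus contributing causes.
    certs = [
        {"underlying": "heart disease", "contributing": ["diabetes", "renal failure"]},
        {"underlying": "cancer", "contributing": []},
        {"underlying": "pneumonia", "contributing": ["dementia"]},
    ]

    rows = []
    for c in certs:
        others = c["contributing"]
        if others:
            rows.append((c["underlying"], 0.5))          # half to underlying cause
            rows += [(cause, 0.5 / len(others)) for cause in others]
        else:
            rows.append((c["underlying"], 1.0))          # sole cause gets weight 1
    weighted = pd.DataFrame(rows, columns=["cause", "weight"])

    print(weighted.groupby("cause")["weight"].sum())     # weighted cause counts
    print("total deaths preserved:", weighted["weight"].sum())   # equals 3.0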
Multivariate longitudinal data analysis with censored and intermittent missing responses.
Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun
2018-05-08
The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.
Lünse, Christina E.; Corbino, Keith A.; Ames, Tyler D.; Nelson, James W.; Roth, Adam; Perkins, Kevin R.; Sherlock, Madeline E.
2017-01-01
Abstract The discovery of structured non-coding RNAs (ncRNAs) in bacteria can reveal new facets of biology and biochemistry. Comparative genomics analyses executed by powerful computer algorithms have successfully been used to uncover many novel bacterial ncRNA classes in recent years. However, this general search strategy favors the discovery of more common ncRNA classes, whereas progressively rarer classes are correspondingly more difficult to identify. In the current study, we confront this problem by devising several methods to select subsets of intergenic regions that can concentrate these rare RNA classes, thereby increasing the probability that comparative sequence analysis approaches will reveal their existence. By implementing these methods, we discovered 224 novel ncRNA classes, which include ROOL RNA, an RNA class averaging 581 nt and present in multiple phyla, several highly conserved and widespread ncRNA classes with properties that suggest sophisticated biochemical functions and a multitude of putative cis-regulatory RNA classes involved in a variety of biological processes. We expect that further research on these newly found RNA classes will reveal additional aspects of novel biology, and allow for greater insights into the biochemistry performed by ncRNAs. PMID:28977401
A comparison of multiple imputation methods for incomplete longitudinal binary data.
Yamaguchi, Yusuke; Misumi, Toshihiro; Maruo, Kazushi
2018-01-01
Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining valid estimates of treatment effects under a missing-at-random assumption. Although there are a variety of multiple imputation methods for longitudinal binary data, few studies have reported on the relative performance of these methods. Moreover, for the treatment effect over an entire period, an endpoint often used in clinical evaluations of specific disease areas, no definitive comparisons of the methods have been available. We conducted an extensive simulation study to examine the comparative performance of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints of responder rates at a specified time point and throughout a period were assessed. The simulation study suggested that results from the naive approaches of single imputation with non-responders and complete case analysis can be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model were recommended for obtaining unbiased and robust estimates of the treatment effect. The methods were illustrated with data from a mental health study.
Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C
2018-01-01
Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended in generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
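A Method-4 style analysis maps naturally onto standard mixed-model software. The Python sketch below simulates paired-limb plantar-pressure data and fits a random-intercept model with statsmodels; a closer match to the paper's model would add nested random effects for foot within participant (e.g., via vc_formula), which is omitted here for brevity.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the plantar pressure data: 30 participants,
    # both feet, 7 regions, 3 trials, with a participant-level random effect.
    rng = np.random.default_rng(5)
    rows = []
    for subj in range(30):
        group = ["control", "case"][subj % 2]
        subj_effect = rng.normal(0, 5)          # between-participant variation
        for foot in ("left", "right"):
            for region in range(1, 8):
                for trial in range(3):
                    pressure = (100 + (10 if group == "case" else 0)
                                + 2 * region + subj_effect + rng.normal(0, 8))
                    rows.append((f"s{subj}", group, foot, f"r{region}",
                                 trial, pressure))
    df = pd.DataFrame(rows, columns=["subject", "group", "foot",
                                     "region", "trial", "pressure"])

    # All feet, regions and trials enter one model; repeated measures are
    # handled by a random intercept per participant rather than averaging.
    model = smf.mixedlm("pressure ~ group * region", df, groups="subject")
    print(model.fit().summary())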
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
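The single-scale core of such models, with method-specific detection probabilities, fits in a few lines. The Python sketch below simulates detection histories for two methods and recovers occupancy and the two detection probabilities by maximum likelihood; the paper's model adds a second, local-scale occupancy level that this sketch omits.

    import numpy as np
    from scipy.optimize import minimize

    # Simulate S sites, J visits, 2 detection methods with different p.
    rng = np.random.default_rng(6)
    S, J = 200, 4
    psi_true, p_true = 0.6, np.array([0.3, 0.5])
    z = rng.random(S) < psi_true                       # true occupancy state
    y = ((rng.random((S, J, 2)) < p_true) & z[:, None, None]).astype(float)

    def negloglik(theta):
        psi, p1, p2 = 1.0 / (1.0 + np.exp(-theta))     # logit -> probability
        p = np.array([p1, p2])
        det = (p ** y * (1 - p) ** (1 - y)).prod(axis=(1, 2))
        never = y.sum(axis=(1, 2)) == 0                # all-zero histories
        return -np.sum(np.log(psi * det + (1 - psi) * never))

    fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
    print("psi, p1, p2 estimates:", 1.0 / (1.0 + np.exp(-fit.x)))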
Probing protein flexibility reveals a mechanism for selective promiscuity
Pabon, Nicolas A; Camacho, Carlos J
2017-01-01
Many eukaryotic regulatory proteins adopt distinct bound and unbound conformations, and use this structural flexibility to bind specifically to multiple partners. However, we lack an understanding of how an interface can select some ligands, but not others. Here, we present a molecular dynamics approach to identify and quantitatively evaluate the interactions responsible for this selective promiscuity. We apply this approach to the anticancer target PD-1 and its ligands PD-L1 and PD-L2. We discover that while unbound PD-1 exhibits a hard-to-drug hydrophilic interface, conserved specific triggers encoded in the cognate ligands activate a promiscuous binding pathway that reveals a flexible hydrophobic binding cavity. Specificity is then established by additional contacts that stabilize the PD-1 cavity into distinct bound-like modes. Collectively, our studies provide insight into the structural basis and evolution of multiple binding partners, and also suggest a biophysical approach to exploit innate binding pathways to drug seemingly undruggable targets. DOI: http://dx.doi.org/10.7554/eLife.22889.001 PMID:28432789
Tanumihardjo, Sherry A; Mokhtar, Najat; Haskell, Marjorie J; Brown, Kenneth H
2016-06-01
Vitamin A (VA) deficiency (VAD) is still a concern in many parts of the world, and multiple intervention strategies are being implemented to reduce the prevalence of VAD and associated morbidity and mortality. Because some individuals within a population may be exposed to multiple VA interventions, concerns have been raised about the possible risk of hypervitaminosis A. A consultative meeting was held in Vienna, Austria, in March 2014 to (1) review current knowledge concerning the safety and effectiveness of large-scale programs to control VAD, (2) develop a related research agenda, and (3) review current available methods to assess VA status and risk of hypervitaminosis A. Multiple countries were represented and shared their experiences using a variety of assessment methods, including retinol isotope dilution (RID) techniques. Discussion included next steps to refine assessment methodology, investigate RID limitations under different conditions, and review programmatic approaches to ensure VA adequacy and avoid excessive intakes. Fortification programs have resulted in adequate VA status in Guatemala, Zambia, and parts of Cameroon. Dietary patterns in several countries revealed that some people may consume excessive preformed VA from fortified foods. Additional studies are needed to compare biomarkers of tissue damage to RID methods during hypervitaminosis A and to determine what other biomarkers can be used to assess excessive preformed VA intake. © The Author(s) 2016.
Fisch-Muller, Sonia; Mol, Jan H A; Covain, Raphaël
2018-01-01
Characterizing and naming species becomes more and more challenging due to the increasing difficulty of accurately delineating specific boundaries. In this context, integrative taxonomy aims to delimit taxonomic units by leveraging the complementarity of multiple data sources (geography, morphology, genetics, etc.). However, while the theoretical framework of integrative taxonomy has been explicitly stated, methods for the simultaneous analysis of multiple data sets are poorly developed and in many cases different information sources are still explored successively. Multi-table methods developed in the field of community ecology provide such an integrative framework. In particular, multiple co-inertia analysis is flexible enough to allow the integration of morphological, distributional, and genetic data in the same analysis. We have applied this powerful approach to delimit species boundaries in a group of poorly differentiated catfishes belonging to the genus Guyanancistrus from the Guianas region of northeastern South America. Because the species G. brevispinis has been claimed to be a species complex consisting of five species, particular attention was paid to this taxon. Separate analyses indicated the presence of eight distinct species of Guyanancistrus, including five new species and one new genus. However, none of the preliminary analyses revealed different lineages within G. brevispinis, whereas the multi-table analysis revealed three intraspecific lineages. After taxonomic clarifications and description of the new genus, species and subspecies, a reappraisal of the biogeography of Guyanancistrus members was performed. This analysis revealed three distinct dispersals from the upper reaches of Amazonian tributaries toward coastal rivers of the Eastern Guianas Ecoregion. The central role played by the Maroni River, as a gateway from the Amazon basin, was confirmed. The Maroni River was also found to be a center of speciation for Guyanancistrus (with three species and two subspecies), as well as a source of dispersal of G. brevispinis toward the other main basins of the Eastern Guianas.
Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.
Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z
2012-07-01
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
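The spatial-filter setting can be sketched with a minimum-variance beamformer in which a multiplicative scale on the magnetometer lead field is chosen to minimise the estimated output power. The Python grid search below is a crude stand-in for the paper's closed-form worst-case estimate, and all dimensions and signals are synthetic.

    import numpy as np

    # Synthetic lead fields for 102 magnetometers and 204 gradiometers and a
    # data covariance; all values are random stand-ins for real MEG data.
    rng = np.random.default_rng(7)
    n_mag, n_grad = 102, 204
    L_mag = rng.standard_normal(n_mag)
    L_grad = rng.standard_normal(n_grad)
    C = np.cov(rng.standard_normal((n_mag + n_grad, 5000)))
    Ci = np.linalg.inv(C + 1e-6 * np.eye(n_mag + n_grad))  # regularised inverse

    def output_power(alpha):
        """Minimum-variance source power with magnetometer lead field scaled."""
        L = np.concatenate([alpha * L_mag, L_grad])
        return 1.0 / (L @ Ci @ L)

    alphas = np.linspace(0.5, 2.0, 61)
    best = min(alphas, key=output_power)
    print("lead-field scale minimising output power:", round(best, 3))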
IR-IR Conformation Specific Spectroscopy of Na+(Glucose) Adducts
NASA Astrophysics Data System (ADS)
Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.; Garand, Etienne
2018-01-01
We report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.
Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.
Ji, L; Danuser, G
2005-12-01
We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
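The gain from integrating correlation scores over several frame pairs can be reproduced on synthetic data. In the Python sketch below, a weak feature drifts one pixel per frame under heavy noise; normalised cross-correlation maps from four consecutive frame pairs are summed before the displacement is read off. Block sizes, margins and signal levels are illustrative.

    import numpy as np

    def ncc_map(block, search):
        """Normalised cross-correlation of block against every shift in search."""
        bh, bw = block.shape
        b = (block - block.mean()) / (block.std() + 1e-12)
        out = np.zeros((search.shape[0] - bh + 1, search.shape[1] - bw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                w = search[i:i + bh, j:j + bw]
                out[i, j] = (b * (w - w.mean()) / (w.std() + 1e-12)).mean()
        return out

    rng = np.random.default_rng(8)
    frames = [rng.random((64, 64)) * 0.2 for _ in range(5)]
    for t, f in enumerate(frames):          # weak feature drifting 1 px/frame
        f[20:28, 10 + t:18 + t] += 0.3

    y0, x0, bs, m = 18, 8, 16, 6            # block origin, size, search margin
    scores = None
    for t in range(4):                      # consecutive frame pairs (t, t+1)
        block = frames[t][y0:y0 + bs, x0:x0 + bs]
        search = frames[t + 1][y0 - m:y0 + bs + m, x0 - m:x0 + bs + m]
        s = ncc_map(block, search)
        scores = s if scores is None else scores + s   # integrate over pairs
    dy, dx = np.unravel_index(np.argmax(scores), scores.shape)
    print("estimated flow per frame (dy, dx):", dy - m, dx - m)   # expect (0, 1)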
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we first extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers, representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
2011-01-01
Background Copy number aberrations (CNAs) are an important molecular signature in cancer initiation, development, and progression. However, these aberrations span a wide range of chromosomes, making it hard to distinguish cancer-related genes from other genes that are not closely related to cancer but are located in broadly aberrant regions. With the current availability of high-resolution data sets such as single nucleotide polymorphism (SNP) microarrays, it has become an important issue to develop a computational method to detect driving genes related to cancer development located in the focal regions of CNAs. Results In this study, we introduce a novel method referred to as the wavelet-based identification of focal genomic aberrations (WIFA). The use of wavelet analysis, because it is a multi-resolution approach, makes it possible to effectively identify focal genomic aberrations in broadly aberrant regions. The proposed method integrates multiple cancer samples so that it enables the detection of consistent aberrations across multiple samples. We then apply this method to glioblastoma multiforme and lung cancer data sets from the SNP microarray platform. Through this process, we confirm the ability to detect previously known cancer-related genes from both cancer types with high accuracy. Also, the application of this approach to a lung cancer data set identifies focal amplification regions that contain known oncogenes, though these regions are not reported by GISTIC, a recent CNA detection algorithm: SMAD7 (chr18q21.1) and FGF10 (chr5p12). Conclusions Our results suggest that WIFA can be used to reveal cancer-related genes in various cancer data sets. PMID:21569311
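As a rough illustration of the multi-resolution idea behind WIFA, the sketch below uses a discrete wavelet transform (via the third-party PyWavelets library) to suppress broad trends in a copy-number profile so that focal aberrations stand out, then averages across samples. The wavelet, decomposition level, and retained scales are illustrative guesses; the actual WIFA procedure is more elaborate.

import numpy as np
import pywt  # PyWavelets

def focal_signal(profile, wavelet="haar", level=6, keep=(1, 2, 3)):
    # Crude multi-resolution filter: reconstruct the profile from the
    # finest `keep` detail levels only, so broad (coarse-scale)
    # aberrations are suppressed and focal ones stand out.
    # Assumes len(profile) is large enough for the requested level.
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    filtered = [np.zeros_like(c) for c in coeffs]
    for k in keep:                      # coeffs[-1] is the finest scale
        filtered[-k] = coeffs[-k]
    return pywt.waverec(filtered, wavelet)[:len(profile)]

# Consistency across samples: average the filtered profiles, e.g.
# focal = np.mean([focal_signal(p) for p in samples], axis=0)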
The educational impact of assessment: A comparison of DOPS and MCQs
Cobb, Kate A.; Brown, George; Jaarsma, Debbie A. D. C.; Hammond, Richard A.
2013-01-01
Aim To evaluate the impact of two different assessment formats on the approaches to learning of final year veterinary students. The relationship between approach to learning and examination performance was also investigated. Method An 18-item version of the Study Process Questionnaire (SPQ) was sent to 87 final year students. Each student responded to the questionnaire with regard to DOPS (Direct Observation of Procedural Skills) and a Multiple Choice Examination (MCQ). Semi-structured interviews were conducted with 16 of the respondents to gain a deeper insight into the students' perception of assessment. Results Students adopted a deeper approach to learning for DOPS and a more surface approach for MCQs. There was a positive correlation between an achieving approach to learning and examination performance. Analysis of the qualitative data revealed that deep, surface and achieving approaches were reported by the students, and seven major influences on their approaches to learning were identified: motivation, purpose, consequence, acceptability, feedback, time pressure and individual differences among students. Conclusions The format of DOPS has a positive influence on approaches to learning. There is a conflict for students between preparing for final examinations and preparing for clinical practice. PMID:23808609
Comparing strategies to assess multiple behavior change in behavioral intervention studies.
Drake, Bettina F; Quintiliani, Lisa M; Sapp, Amy L; Li, Yi; Harley, Amy E; Emmons, Karen M; Sorensen, Glorian
2013-03-01
Alternatives to individual behavior change methods have been proposed; however, little has been done to investigate how these methods compare. Our aim was to explore four methods that quantify change in multiple risk behaviors, targeting four common behaviors. We utilized data from two cluster-randomized, multiple behavior change trials conducted in two settings: small businesses and health centers. The methods used were: (1) summative; (2) z-score; (3) optimal linear combination; and (4) impact score. In the Small Business study, methods 2 and 3 revealed similar outcomes; however, physical activity did not contribute to method 3. In the Health Centers study, similar results were found with each of the methods. Multivitamin intake contributed significantly more to each of the summary measures than the other behaviors. Selection of methods to assess multiple behavior change in intervention trials must consider the study design and the targeted population when determining the appropriate method(s) to use.
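A minimal sketch of two of the four compared summary strategies, assuming a subjects-by-behaviors array of change scores (the exact scoring rules used in the trials differ):

import numpy as np

def zscore_summary(behaviors):
    # Method 2 (z-score): standardize each behavior across subjects
    # and sum the standardized changes into one multiple-behavior
    # score per subject. behaviors: (n_subjects, n_behaviors) array.
    z = (behaviors - behaviors.mean(axis=0)) / behaviors.std(axis=0, ddof=1)
    return z.sum(axis=1)

def summative_summary(goal_met):
    # Method 1 (summative): count how many behavior goals each subject
    # met. goal_met: boolean (n_subjects, n_behaviors) array.
    return goal_met.sum(axis=1)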
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
The multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori criterion for this recomputing, based on the change of the arithmetic mean of the solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, acceleration of up to 1.6 times compared to the approach without recomputing is obtained.
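The sketch below illustrates one reading of the recomputing criterion with SciPy's BiCGStab: the preconditioner is rebuilt whenever the current solution time exceeds the arithmetic mean of the times recorded since the last rebuild. The ILU preconditioner and the exact trigger are assumptions for illustration; the paper's preconditioner may differ.

import time
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

def solve_sequence(matrices, rhs_list):
    # Solve a sequence of related linear systems with BiCGStab,
    # rebuilding the preconditioner when solves start slowing down.
    solutions, times, M = [], [], None
    for A, b in zip(matrices, rhs_list):
        A = csc_matrix(A)
        if M is None:                       # (re)compute preconditioner
            ilu = spilu(A)
            M = LinearOperator(A.shape, matvec=ilu.solve)
            times = []
        t0 = time.perf_counter()
        x, info = bicgstab(A, b, M=M)
        dt = time.perf_counter() - t0
        if times and dt > np.mean(times):   # current solve above mean
            M = None                        # trigger rebuild next time
        times.append(dt)
        solutions.append(x)
    return solutions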
Yi, Ming; Mudunuri, Uma; Che, Anney; Stephens, Robert M
2009-06-29
One of the challenges in the analysis of microarray data is to integrate and compare the selected (e.g., differential) gene lists from multiple experiments for common or unique underlying biological themes. A common way to approach this problem is to extract common genes from these gene lists and then subject these genes to enrichment analysis to reveal the underlying biology. However, the capacity of this approach is largely restricted by the limited number of common genes shared by datasets from multiple experiments, which could be caused by the complexity of the biological system itself. We now introduce a new Pathway Pattern Extraction Pipeline (PPEP), which extends the existing WPS application by providing a new pathway-level comparative analysis scheme. To facilitate comparing and correlating results from different studies and sources, PPEP contains new interfaces that allow evaluation of the pathway-level enrichment patterns across multiple gene lists. As an exploratory tool, this analysis pipeline may help reveal the underlying biological themes at both the pathway and gene levels. The analysis scheme provided by PPEP begins with multiple gene lists, which may be derived from different studies in terms of the biological contexts, applied technologies, or methodologies. These lists are then subjected to pathway-level comparative analysis for extraction of pathway-level patterns. This analysis pipeline helps to explore the commonality or uniqueness of these lists at the level of pathways or biological processes from different but relevant biological systems using a combination of statistical enrichment measurements, pathway-level pattern extraction, and graphical display of the relationships of genes and their associated pathways as Gene-Term Association Networks (GTANs) within the WPS platform. As a proof of concept, we have used the new method to analyze many datasets from our collaborators as well as some public microarray datasets. This tool provides a new pathway-level analysis scheme for integrative and comparative analysis of data derived from different but relevant systems. The tool is freely available as a Pathway Pattern Extraction Pipeline implemented in our existing software package WPS, which can be obtained at http://www.abcc.ncifcrf.gov/wps/wps_index.php.
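A generic sketch of the kind of pathway-level comparison PPEP tabulates follows: one hypergeometric enrichment P-value per (gene list, pathway) cell, whose rows and columns can then be inspected for shared or list-specific patterns. This is not the WPS implementation; the names and data structures are illustrative.

from scipy.stats import hypergeom

def enrichment_table(gene_lists, pathways, universe):
    # gene_lists: {list_name: iterable of genes}
    # pathways:   {pathway_name: iterable of member genes}
    # universe:   iterable of all assayed genes
    universe = set(universe)
    M = len(universe)
    table = {}
    for list_name, genes in gene_lists.items():
        genes = set(genes) & universe
        for pw_name, members in pathways.items():
            members = set(members) & universe
            k = len(genes & members)
            # P(X >= k) when drawing len(genes) genes from the universe
            table[(list_name, pw_name)] = hypergeom.sf(
                k - 1, M, len(members), len(genes))
    return table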
Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo
2018-08-15
Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
As evidenced by the three rounds of G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains a prominent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
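Schematically, the "refined restraints first" step can be pictured as follows: contacts that recur in every template-target alignment are collected and converted into distance restraints on the target. This sketch uses CA-CA distances and a simple cutoff; the contact definition, cutoff, and restraint form are assumptions, and the paper's conservation analysis is more involved.

import numpy as np

def conserved_restraints(template_coords, alignments, cutoff=8.0):
    # template_coords: list of (n_res, 3) CA coordinate arrays.
    # alignments: list of dicts mapping template residue index to the
    # aligned target residue index. Returns {(i_target, j_target):
    # mean template distance} for contacts seen in every template.
    restraint_sets = []
    for coords, aln in zip(template_coords, alignments):
        contacts = {}
        n = len(coords)
        for i in range(n):
            for j in range(i + 4, n):       # skip near-sequential pairs
                d = np.linalg.norm(coords[i] - coords[j])
                if d < cutoff and i in aln and j in aln:
                    contacts[(aln[i], aln[j])] = d
        restraint_sets.append(contacts)
    common = set.intersection(*(set(c) for c in restraint_sets))
    # Mean template distance per conserved pair could serve as the
    # center of a harmonic restraint during model building.
    return {pair: np.mean([c[pair] for c in restraint_sets])
            for pair in common}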
Discovering time-lagged rules from microarray data using gene profile classifiers
2011-01-01
Background Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships with any span of time between genes from various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for the cell-cycle study, and the results were compared against several related approaches. The outcomes have shown that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experimentation has exhibited the soundness and scalability of the new method which inferred highly-related statistically-significant gene associations. Conclusions A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. PMID:21524308
Na, Wongi S.; Baek, Jongdae
2017-01-01
The emergence of composite materials has revolutionized the approach to building engineering structures. With the number of applications for composites increasing every day, maintaining structural integrity is of utmost importance. For composites, adhesive bonding is usually the preferred choice over mechanical fastening, and monitoring for delamination is an essential factor in the field of composite materials. In this study, a non-destructive method known as the electromechanical impedance method is used with an approach that monitors multiple areas by assigning a specific frequency range to each test specimen. Experiments are conducted using various numbers of stacks created by attaching glass fiber epoxy composite plates onto one another, and two different debonding damage types are introduced to evaluate the performance of the multiple-monitoring electromechanical impedance method. PMID:28629194
Dimou, Niki L; Pantavou, Katerina G; Bagos, Pantelis G
2017-09-01
Apolipoprotein E (ApoE) is potentially a genetic risk factor for the development of left ventricular failure (LVF), the main cause of death in beta-thalassemia homozygotes. In the present study, we synthesize the results of independent studies examining the effect of ApoE on LVF development in thalassemic patients through a meta-analytic approach. However, all studies report more than one outcome, as patients are classified into three groups according to the severity of the symptoms and the genetic polymorphism. Thus, a multivariate meta-analytic method that addresses simultaneously multiple exposures and multiple comparison groups was developed. Four individual studies were included in the meta-analysis involving 613 beta-thalassemic patients and 664 controls. The proposed method that takes into account the correlation of log odds ratios (log(ORs)), revealed a statistically significant overall association (P-value = 0.009), mainly attributed to the contrast of E4 versus E3 allele for patients with evidence (OR: 2.32, 95% CI: 1.19, 4.53) or patients with clinical and echocardiographic findings (OR: 3.34, 95% CI: 1.78, 6.26) of LVF. This study suggests that E4 is a genetic risk factor for LVF in beta-thalassemia major. The presented multivariate approach can be applied in several fields of research. © 2017 John Wiley & Sons Ltd/University College London.
NASA Astrophysics Data System (ADS)
Guinn, Emily J.; Jagannathan, Bharat; Marqusee, Susan
2015-04-01
A fundamental question in protein folding is whether proteins fold through one or multiple trajectories. While most experiments indicate a single pathway, simulations suggest proteins can fold through many parallel pathways. Here, we use a combination of chemical denaturant, mechanical force and site-directed mutations to demonstrate the presence of multiple unfolding pathways in a simple, two-state folding protein. We show that these multiple pathways have structurally different transition states, and that seemingly small changes in protein sequence and environment can strongly modulate the flux between the pathways. These results suggest that in vivo, the crowded cellular environment could strongly influence the mechanisms of protein folding and unfolding. Our study resolves the apparent dichotomy between experimental and theoretical studies, and highlights the advantage of using a multipronged approach to reveal the complexities of a protein's free-energy landscape.
Socioscientific Argumentation: The effects of content knowledge and morality
NASA Astrophysics Data System (ADS)
Sadler, Troy D.; Donnelly, Lisa A.
2006-10-01
Broad support exists within the science education community for the incorporation of socioscientific issues (SSI) and argumentation in the science curriculum. This study investigates how content knowledge and morality contribute to the quality of SSI argumentation among high school students. We employed a mixed-methods approach: 56 participants completed tests of content knowledge and moral reasoning as well as interviews, related to SSI topics, which were scored based on a rubric for argumentation quality. Multiple regression analyses revealed no statistically significant relationships among content knowledge, moral reasoning, and argumentation quality. Qualitative analyses of the interview transcripts supported the quantitative results in that participants very infrequently revealed patterns of content knowledge application. However, most of the participants did perceive the SSI as moral problems. We propose a “Threshold Model of Knowledge Transfer” to account for the relationship between content knowledge and argumentation quality. Implications for science education are discussed.
Clark, Maria T; Clark, Richard J; Toohey, Shane; Bradbury-Jones, Caroline
2017-03-01
Acupuncture shows promise as a treatment for plantar heel pain (PHP) or plantar fasciitis (PF), but data heterogeneity has undermined demonstration of efficacy. Recognising that acupuncture is a diverse field of practice, the aim of this study was to gain a broader, global perspective on the different approaches and rationales used in the application of acupuncture in PHP. We built upon an earlier systematic review (which was limited by the necessity of a methodological focus on efficacy) using the critical interpretive synthesis (CIS) method to draw upon a wider international sample of 25 clinical sources, including case reports and case series. Multiple tracks of analysis led to an emergent synthesis. Findings are presented at three levels: primary (summarised data); secondary (patterns observed); and tertiary (emergent synthesis). Multiple treatments and rationales were documented but no single approach dominated. Notable contradictions emerged such as the application of moxibustion by some authors and ice by others. Synthesis of findings revealed a 'patchwork' of factors influencing the approaches taken. The complexity of the field of acupuncture was illustrated through the 'lens' of PHP. The 'patchwork' metaphor provides a unifying framework for a previously divergent community of practice and research. Several directions for future research were identified, such as: importance of prior duration; existence of diagnostic subgroups; and how practitioners make clinical decisions and report their findings. CIS was found to provide visibility for multiple viewpoints in developing theory and modelling the processes of 'real world' practice by acupuncturists addressing the problem of PHP. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Ha, Kee-Yong; Kim, Hyun-Woo
2013-01-01
Multiple myeloma, a multicentric hematological malignancy, is the most common primary tumor of the spine. As epidural myeloma causing spinal cord compression is a rare condition, its therapeutic approaches and clinical results have been reported to be diverse, and no clear guidelines for therapeutic decisions have been established. Three patients presented with progressive paraplegia and sensory disturbance. Imaging and serological studies revealed multiple myeloma and spinal cord compression caused by epidural myeloma. Emergency radiotherapy and steroid therapy were performed in all three cases. However, their clinical courses and results were distinctly different. Following review of our cases and the related literature, we suggest a systematic therapeutic approach for these patients to achieve better clinical results. PMID:24175035
ERIC Educational Resources Information Center
Lee, Jimin; Hustad, Katherine C.; Weismer, Gary
2014-01-01
Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…
Hayashi, Yoshikazu; Yamamoto, Hironori; Kita, Hiroto; Sunada, Keijiro; Sato, Hiroyuki; Yano, Tomonori; Iwamoto, Michiko; Sekine, Yutaka; Miyata, Tomohiko; Kuno, Akiko; Iwaki, Takaaki; Kawamura, Yoshiyuki; Ajibe, Hironari; Ido, Kenichi; Sugano, Kentaro
2005-01-01
AIM: To clarify the clinical features of NSAID-induced small bowel lesions using a new method of endoscopy. METHODS: This is a retrospective study in which we analyzed seven patients with small bowel lesions while taking NSAIDs, among 61 patients who had undergone double-balloon endoscopy because of gastrointestinal bleeding or anemia between September 2000 and March 2004 at Jichi Medical School Hospital in Japan. Neither conventional EGD nor colonoscopy revealed any lesions of potential bleeding sources, including ulcerations. Double-balloon endoscopy was carried out via the oral approach in three patients, via the anal approach in three patients, and via both approaches in one patient. RESULTS: Ulcers or erosions were observed in the ileum in six patients and in the jejunum in one patient. The ulcers were multiple in all the patients, with features ranging from tiny punched-out ulcers to deep ulcerations with oozing hemorrhage or scarring. All the patients recovered uneventfully and had full resolution of symptoms after suspension of the drug. CONCLUSION: NSAIDs can induce injuries in the small bowel even in patients without any lesions in either the stomach or the colon. PMID:16097059
NASA Astrophysics Data System (ADS)
Kalid, Ori; Toledo Warshaviak, Dora; Shechter, Sharon; Sherman, Woody; Shacham, Sharon
2012-11-01
We present the Consensus Induced Fit Docking (cIFD) approach for adapting a protein binding site to accommodate multiple diverse ligands for virtual screening. This novel approach results in a single binding site structure that can bind diverse chemotypes and is thus highly useful for efficient structure-based virtual screening. We first describe the cIFD method and its validation on three targets that were previously shown to be challenging for docking programs (COX-2, estrogen receptor, and HIV reverse transcriptase). We then demonstrate the application of cIFD to the challenging discovery of irreversible Crm1 inhibitors. We report the identification of 33 novel Crm1 inhibitors, which resulted from the testing of 402 purchased compounds selected from a screening set containing 261,680 compounds. This corresponds to a hit rate of 8.2 %. The novel Crm1 inhibitors reveal diverse chemical structures, validating the utility of the cIFD method in a real-world drug discovery project. This approach offers a pragmatic way to implicitly account for protein flexibility without the additional computational costs of ensemble docking or including full protein flexibility during virtual screening.
Paul, Fiona; Otte, Jürgen; Schmitt, Imke; Dal Grande, Francesco
2018-06-05
The implementation of HTS (high-throughput sequencing) approaches is rapidly changing our understanding of the lichen symbiosis, by uncovering high bacterial and fungal diversity, which is often host-specific. Recently, HTS methods revealed the presence of multiple photobionts inside a single thallus in several lichen species. This differs from Sanger technology, which typically yields a single, unambiguous algal sequence per individual. Here we compared HTS and Sanger methods for estimating the diversity of green algal symbionts within lichen thalli using 240 lichen individuals belonging to two species of lichen-forming fungi. According to HTS data, Sanger technology consistently yielded the most abundant photobiont sequence in the sample. However, if the second most abundant photobiont exceeded 30% of the total HTS reads in a sample, Sanger sequencing generally failed. Our results suggest that most lichen individuals in the two analyzed species, Lasallia hispanica and L. pustulata, indeed contain a single, predominant green algal photobiont. We conclude that Sanger sequencing is a valid approach to detect the dominant photobionts in lichen individuals and populations. We discuss which research areas in lichen ecology and evolution will continue to benefit from Sanger sequencing, and which areas will profit from HTS approaches to assessing symbiont diversity.
Visualizing Matrix Multiplication
ERIC Educational Resources Information Center
Daugulis, Peteris; Sondore, Anita
2018-01-01
Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
Phase-field approach to implicit solvation of biomolecules with Coulomb-field approximation
NASA Astrophysics Data System (ADS)
Zhao, Yanxiang; Kwan, Yuen-Yick; Che, Jianwei; Li, Bo; McCammon, J. Andrew
2013-07-01
A phase-field variational implicit-solvent approach is developed for the solvation of charged molecules. The starting point of such an approach is the representation of a solute-solvent interface by a phase field that takes one value in the solute region and another in the solvent region, with a smooth transition from one to the other on a small transition layer. The minimization of an effective free-energy functional of all possible phase fields determines the equilibrium conformations and free energies of an underlying molecular system. All the surface energy, the solute-solvent van der Waals interaction, and the electrostatic interaction are coupled together self-consistently through a phase field. The surface energy results from the minimization of a double-well potential and the gradient of a field. The electrostatic interaction is described by the Coulomb-field approximation. Accurate and efficient methods are designed and implemented to numerically relax an underlying charged molecular system. Applications to single ions, a two-plate system, and a two-domain protein reveal that the new theory and methods can capture capillary evaporation in hydrophobic confinement and corresponding multiple equilibrium states as found in molecular dynamics simulations. Comparisons of the phase-field and the original sharp-interface variational approaches are discussed.
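For the surface part of the functional, a minimal discrete sketch is given below: the energy combines a gradient term with a double-well potential of the phase field on a 3D grid. The double-well form W(phi) = 18 phi^2 (1 - phi)^2 and the constants are common choices assumed here for illustration; the van der Waals and Coulomb-field electrostatic terms of the full model are omitted.

import numpy as np

def surface_energy(phi, h, eps=1.0, gamma=1.0):
    # Discrete surface energy on a 3D grid with spacing h:
    # gamma * sum( eps/2 |grad phi|^2 + W(phi)/eps ) * h^3,
    # with phi ~ 0 in one region and ~ 1 in the other.
    gx, gy, gz = np.gradient(phi, h)
    grad2 = gx**2 + gy**2 + gz**2
    W = 18.0 * phi**2 * (1.0 - phi)**2      # double-well potential
    return gamma * np.sum(0.5 * eps * grad2 + W / eps) * h**3

# A relaxation scheme would follow steepest descent on phi,
# subtracting dt * dE/dphi, with the solute-solvent van der Waals
# and electrostatic terms added to the functional in the full model.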
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
NASA Astrophysics Data System (ADS)
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, diverse methods for filling gaps in observed precipitation series have been proposed in recent years, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data in multiple incomplete series at the same time within a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, with the non-linear PCA (NLPCA) method considerably outperforming the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.
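The quantile-mapping post-processing step can be sketched in a few lines: imputed values are mapped through the empirical quantiles of the imputed series onto those of the observed series at the same station. This is a generic formulation of quantile mapping, not the paper's exact implementation; tied quantiles from the many dry days in daily rainfall would need extra care in practice.

import numpy as np

def quantile_map(imputed, observed, n_q=101):
    # Match the distribution of the imputed series to the observed
    # one by interpolating between corresponding empirical quantiles.
    q = np.linspace(0.0, 1.0, n_q)
    src = np.quantile(imputed, q)     # quantiles of the imputed series
    dst = np.quantile(observed, q)    # quantiles of the observed series
    return np.interp(imputed, src, dst)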
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Multiple-methods investigation of recharge at a humid-region fractured rock site, Pennsylvania, USA
Heppner, C.S.; Nimmo, J.R.; Folmar, G.J.; Gburek, W.J.; Risser, D.W.
2007-01-01
Lysimeter-percolate and well-hydrograph analyses were combined to evaluate recharge for the Masser Recharge Site (central Pennsylvania, USA). In humid regions, aquifer recharge through an unconfined low-porosity fractured-rock aquifer can cause large magnitude water-table fluctuations over short time scales. The unsaturated hydraulic characteristics of the subsurface porous media control the magnitude and timing of these fluctuations. Data from multiple sets of lysimeters at the site show a highly seasonal pattern of percolate and exhibit variability due to both installation factors and hydraulic property heterogeneity. Individual event analysis of well hydrograph data reveals the primary influences on water-table response, namely rainfall depth, rainfall intensity, and initial water-table depth. Spatial and seasonal variability in well response is also evident. A new approach for calculating recharge from continuous water-table elevation records using a master recession curve (MRC) is demonstrated. The recharge estimated by the MRC approach when assuming a constant specific yield is seasonal to a lesser degree than the recharge estimate resulting from the lysimeter analysis. Partial reconciliation of the two recharge estimates is achieved by considering a conceptual model of flow processes in the highly-heterogeneous underlying fractured porous medium. © Springer-Verlag 2007.
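In pseudocode terms, the MRC-based recharge calculation amounts to comparing each observed water-table level with the level the master recession curve predicts and attributing any excess rise to recharge. The sketch below assumes a constant specific yield, as discussed in the abstract, and leaves the form of the fitted MRC to the caller; the function signature and names are illustrative.

import numpy as np

def mrc_recharge(levels, dt, recession, sy=0.01):
    # levels: water-table elevations at uniform spacing dt.
    # recession(h): MRC decline rate at head h (from the fitted curve),
    # e.g. recession = lambda h: a * (h - h0) for a linear MRC.
    total = 0.0
    for k in range(1, len(levels)):
        predicted = levels[k - 1] - recession(levels[k - 1]) * dt
        rise = levels[k] - predicted
        if rise > 0.0:                 # rise above the recession curve
            total += sy * rise         # counts as recharge
    return total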
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
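A didactic sketch of the multiple-imputation idea for one left-censored predictor in a logistic model follows: censored values are drawn from a normal fitted to the observed part and truncated at the detection limit, a model is fit per completed data set, and estimates are pooled with Rubin's rules. The paper's proposed imputation model is considerably more careful; this shows only the mechanics, and all names are illustrative.

import numpy as np
import statsmodels.api as sm
from scipy.stats import truncnorm

def mi_logistic(X, y, censored, limit, m=20, seed=0):
    # X: (n, p) predictors with column 0 subject to a detection limit;
    # censored: boolean mask of observations below the limit.
    rng = np.random.default_rng(seed)
    obs = X[~censored, 0]
    mu, sd = obs.mean(), obs.std(ddof=1)
    b = (limit - mu) / sd                  # truncate above at the limit
    betas, variances = [], []
    for _ in range(m):
        Xi = X.copy()
        Xi[censored, 0] = truncnorm.rvs(-np.inf, b, loc=mu, scale=sd,
                                        size=censored.sum(),
                                        random_state=rng)
        fit = sm.Logit(y, sm.add_constant(Xi)).fit(disp=0)
        betas.append(fit.params)
        variances.append(np.diag(fit.cov_params()))
    betas, variances = np.array(betas), np.array(variances)
    pooled = betas.mean(axis=0)
    within = variances.mean(axis=0)
    between = betas.var(axis=0, ddof=1)
    total_var = within + (1 + 1 / m) * between   # Rubin's rules
    return pooled, np.sqrt(total_var)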
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
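For orientation, the classic EDS reference-state energy that this work builds on can be evaluated with a log-sum-exp, as sketched below. The accelerated EDS Hamiltonian proposed in the paper modifies this functional form so as to preserve local minima of the end-states, and is not reproduced here.

import numpy as np
from scipy.special import logsumexp

def eds_reference_energy(energies, offsets, beta, s=1.0):
    # Classic EDS reference-state energy for one configuration:
    # H_R = -1/(s*beta) * ln sum_i exp(-s*beta*(H_i - E_i)),
    # where energies are the end-state energies H_i at that
    # configuration, offsets are the free-energy offsets E_i,
    # beta = 1/kT, and s is the smoothing parameter.
    x = -s * beta * (np.asarray(energies) - np.asarray(offsets))
    return -logsumexp(x) / (s * beta)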
Methodological issues underlying multiple decrement life table analysis.
Mode, C J; Avery, R C; Littman, G S; Potter, R G
1977-02-01
In this paper, the actuarial method of multiple decrement life table analysis of censored, longitudinal data is examined. The discussion is organized in terms of the first segment of usage of an intrauterine device. Weaknesses of the actuarial approach are pointed out, and an alternative approach, based on the classical model of competing risks, is proposed. Finally, the actuarial and the alternative method of analyzing censored data are compared, using data from the Taichung Medical Study on Intrauterine Devices.
NASA Astrophysics Data System (ADS)
Belica, L.; Mitasova, H.; Caldwell, P.; McCarter, J. B.; Nelson, S. A. C.
2017-12-01
Thermal regimes of forested headwater streams continue to be an area of active research, as climatic, hydrologic, and land cover changes can influence water temperature, a key aspect of aquatic ecosystems. Widespread monitoring of stream temperatures has provided an important data source, yielding insights on the temporal and spatial patterns and the underlying processes that influence stream temperature. However, small forested streams remain challenging to model due to the high spatial and temporal variability of stream temperatures and the climatic and hydrologic conditions that drive them. Technological advances and increased computational power continue to provide new tools and measurement methods and have allowed spatially explicit analyses of dynamic natural systems at greater temporal resolutions than previously possible. With the goal of understanding how current stream temperature patterns and processes may respond to changing land cover and hydroclimatological conditions, we combined high-resolution, spatially explicit geospatial modeling with deterministic heat flux modeling approaches, using data sources that ranged from traditional hydrological and climatological measurements to emerging remote sensing techniques. Initial analyses of stream temperature monitoring data revealed that high temporal resolution (5 minutes) and measurement resolution (<0.1°C) were needed to adequately describe diel stream temperature patterns and capture the differences between paired 1st order and 4th order forest streams draining north and south facing slopes. This finding, along with geospatial models of subcanopy solar radiation and channel morphology, was used to develop hypotheses and guide field data collection for further heat flux modeling. By integrating multiple approaches and optimizing data resolution for the processes being investigated, small but ecologically significant differences in stream thermal regimes were revealed. In this case, multi-approach research contributed to the identification of the dominant mechanisms driving stream temperature in the study area and advanced our understanding of the current thermal fluxes and how they may change as environmental conditions change in the future.
Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches
NASA Astrophysics Data System (ADS)
Hooper, A.
2006-12-01
InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and also allow us to reduce error terms present in single interferograms. There are currently two broad categories of methods that deal with multiple images: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels that are dominated by a singular scatterer best meet these criteria; therefore, images are processed at full resolution to both increase the chance of there being only one dominant scatterer present, and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is less optimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that interferes only pairs of images for which the difference in look angle is small makes better sense, and resolution can be sacrificed to reduce the effects of the look angle difference by band-pass filtering. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach fails to take advantage of two of the benefits of processing multiple acquisitions, however, which are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being that of time. We have developed, therefore, a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found. After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.
Detection and estimation of defects in a circular plate using operational deflection shapes
NASA Astrophysics Data System (ADS)
Pai, Perngjin F.; Oh, Yunje; Kim, Byeong-Seok
2002-06-01
This paper investigates dynamic characteristics (mode shapes and natural frequencies) and defect detection of circular plates using a scanning laser vibrometer. Exact dynamic characteristics of a circular aluminum plate having a clamped inner rim and a free outer rim are obtained using two methods: one uses Bessel functions and the other uses a multiple shooting method. An in-house finite element code GESA is also used to analyze the circular plate using the DKT plate element. Numerical results show that some reports in the literature are incorrect and that high-frequency Operational Deflection Shapes (ODSs) are needed in order to locate small defects. Detection of two defects in the circular aluminum plate under broadband periodic chirp excitations is experimentally studied using four approaches: distributions of RMS velocities of ODSs, symmetry breaking of ODSs, splitting of natural frequencies and ODSs, and a Boundary Effect Detection (BED) method. The BED method is non-destructive and model-independent; it processes experimental ODSs to reveal extra local boundary effects caused by defects and thereby the locations of the defects. Experimental results show that small defects in circular plates can be pinpointed by these approaches. Moreover, a new concept of using the balance of elastic and kinetic energies within a mode cell for detecting defects in two-dimensional structures of irregular shapes is proposed.
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
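The mechanics of the resampling scheme can be sketched as follows: residuals are standardized by group (so they share second moments under the null), pooled, permuted into new groups of the original sizes, and the Bartlett-type statistic is recomputed on each resample. Robust location and scatter estimators, the paper's key improvement, could be substituted for the mean and sample covariance used here; the sketch assumes each group has more observations than variables.

import numpy as np

def bartlett_stat(groups):
    # Bartlett/Box-type statistic for equality of covariance matrices:
    # (N - k) ln|S_pooled| - sum_i (n_i - 1) ln|S_i|.
    k = len(groups)
    ns = [g.shape[0] for g in groups]
    N = sum(ns)
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    stat = (N - k) * np.linalg.slogdet(pooled)[1]
    for n, S in zip(ns, covs):
        stat -= (n - 1) * np.linalg.slogdet(S)[1]
    return stat

def resampling_pvalue(groups, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    std_resid = []
    for g in groups:
        # Center by the group mean and whiten by the group covariance.
        L = np.linalg.cholesky(np.cov(g, rowvar=False))
        std_resid.append(np.linalg.solve(L, (g - g.mean(axis=0)).T).T)
    pool = np.vstack(std_resid)
    sizes = np.cumsum([g.shape[0] for g in groups])[:-1]
    obs = bartlett_stat(groups)
    count = 0
    for _ in range(n_boot):
        splits = np.split(rng.permutation(pool), sizes)
        count += bartlett_stat(splits) >= obs
    return (count + 1) / (n_boot + 1)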
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
Wassermann, Anne Mai; Lounkine, Eugen; Glick, Meir
2013-03-25
Virtual screening using bioactivity profiles has become an integral part of currently applied hit finding methods in pharmaceutical industry. However, a significant drawback of this approach is that it is only applicable to compounds that have been biologically tested in the past and have sufficient activity annotations for meaningful profile comparisons. Although bioactivity data generated in pharmaceutical institutions are growing on an unprecedented scale, the number of biologically annotated compounds still covers only a minuscule fraction of chemical space. For a newly synthesized compound or an isolated natural product to be biologically characterized across multiple assays, it may take a considerable amount of time. Consequently, this chemical matter will not be included in virtual screening campaigns based on bioactivity profiles. To overcome this problem, we herein introduce bioturbo similarity searching that uses chemical similarity to map molecules without biological annotations into bioactivity space and then searches for biologically similar compounds in this reference system. In benchmark calculations on primary screening data, we demonstrate that our approach generally achieves higher hit rates and identifies structurally more diverse compounds than approaches using chemical information only. Furthermore, our method is able to discover hits with novel modes of inhibition that traditional 2D and 3D similarity approaches are unlikely to discover. Test calculations on a set of natural products reveal the practical utility of the approach for identifying novel and synthetically more accessible chemical matter.
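Bioturbo similarity searching can be caricatured in a few lines: a query compound without assay annotations borrows the bioactivity profile of its chemically nearest annotated neighbor and is then matched against the database in bioactivity space. The binary fingerprint format and the single-neighbor mapping are simplifying assumptions, not the authors' exact procedure.

import numpy as np

def tanimoto(a, b):
    # Tanimoto similarity of two binary fingerprint vectors.
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

def bioturbo_rank(query_fp, annotated_fps, profiles):
    # annotated_fps: (n, n_bits) binary array of fingerprints;
    # profiles: (n, n_assays) bioactivity profiles of the same
    # compounds. Returns database indices, most similar first.
    sims = np.array([tanimoto(query_fp, fp) for fp in annotated_fps])
    surrogate = profiles[sims.argmax()]     # borrowed bioactivity profile
    z = profiles - profiles.mean(axis=1, keepdims=True)
    zq = surrogate - surrogate.mean()
    corr = z @ zq / (np.linalg.norm(z, axis=1) * np.linalg.norm(zq) + 1e-12)
    return np.argsort(-corr)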
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
obtained by using the block Gauss-Seidel iterative method. To show the convergence of the iterative method, we define the error between two... models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss-Seidel iterative method depends on the... variational approach. A block Gauss-Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
Irwin, Gareth; Kerwin, David G; Williams, Genevieve; Van Emmerik, Richard E A; Newell, Karl M; Hamill, Joseph
2018-06-18
A case study visualisation approach to examining the coordination and variability of multiple interacting segments is presented using a whole-body gymnastic skill as the task example. One elite male gymnast performed 10 trials of 10 longswings whilst three-dimensional locations of joint centres were tracked using a motion analysis system. Segment angles were used to define coupling between the arms and trunk, trunk and thighs and thighs and shanks. Rectified continuous relative phase profiles for each interacting couple for 80 longswings were produced. Graphical representations of coordination couplings are presented that include the traditional single coupling, followed by the relational dynamics of two couplings and finally three couplings simultaneously plotted. This method highlights the power of visualisation of movement dynamics and identifies properties of the global interacting segmental couplings that a more formal analysis may not reveal. Visualisation precedes and informs the appropriate qualitative and quantitative analysis of the dynamics.
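For the coupling measure itself, a standard way to obtain continuous relative phase between two segment angle series is via the Hilbert transform, as sketched below; the paper plots rectified profiles, and phase-angle conventions vary across studies, so this is a generic formulation rather than the authors' exact pipeline.

import numpy as np
from scipy.signal import hilbert

def continuous_relative_phase(theta_prox, theta_dist):
    # Difference of the instantaneous phases of the analytic (Hilbert)
    # signals of the two centered segment angle series, rectified as
    # in the paper's profiles. Returns radians.
    phases = []
    for theta in (theta_prox, theta_dist):
        analytic = hilbert(theta - np.mean(theta))
        phases.append(np.unwrap(np.angle(analytic)))
    return np.abs(phases[0] - phases[1])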
Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio
2015-06-01
Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.
ERIC Educational Resources Information Center
Durston, Sarah; Konrad, Kerstin
2007-01-01
This paper aims to illustrate how combining multiple approaches can inform us about the neurobiology of ADHD. Converging evidence from genetic, psychopharmacological and functional neuroimaging studies has implicated dopaminergic fronto-striatal circuitry in ADHD. However, while the observation of converging evidence from multiple vantage points…
University students' achievement goals and approaches to learning in mathematics.
Cano, Francisco; Berbén, A B G
2009-03-01
Achievement goals (AG) and students' approaches to learning (SAL) are two research perspectives on student motivation and learning in higher education that have until now been pursued quite independently. This study sets out: (a) to explore the relationship between the most representative variables of SAL and AG; (b) to identify subgroups (clusters) of students with multiple AG; and (c) to examine the differences between these clusters with respect to various SAL and AG characteristics. The participants were 680 male and female 1st year university students studying different subjects (e.g. mathematics, physics, economics) but all enrolled on mathematics courses (e.g. algebra, calculus). Participants completed a series of questionnaires that measured their conceptions of mathematics, approaches to learning, course experience, personal 2 x 2 AG, and perceived AG. SAL and AG variables were moderately associated and related to both the way students perceived their academic environment and the way they conceived of the nature of mathematics (i.e. the perceptual-cognitive framework). Four clusters of students with distinctive multiple AG were identified and when the differences between clusters were analysed, we were able to attribute them to various constructs including perceptual-cognitive framework, learning approaches, and academic performance. This study reveals a consistent pattern of relationships between SAL and AG perspectives across different methods of analysis, supports the relevance of the 2 x 2 AG framework in a mathematics learning context and suggests that AG and SAL may be intertwined aspects of students' experience of learning mathematics at university.
A Comprehensive Strategy for Accurate Mutation Detection of the Highly Homologous PMS2.
Li, Jianli; Dai, Hongzheng; Feng, Yanming; Tang, Jia; Chen, Stella; Tian, Xia; Gorman, Elizabeth; Schmitt, Eric S; Hansen, Terah A A; Wang, Jing; Plon, Sharon E; Zhang, Victor Wei; Wong, Lee-Jun C
2015-09-01
Germline mutations in the DNA mismatch repair gene PMS2 underlie the cancer susceptibility syndrome, Lynch syndrome. However, accurate molecular testing of PMS2 is complicated by a large number of highly homologous sequences. To establish a comprehensive approach for mutation detection of PMS2, we have designed a strategy combining targeted capture next-generation sequencing (NGS), multiplex ligation-dependent probe amplification, and long-range PCR followed by NGS to simultaneously detect point mutations and copy number changes of PMS2. Exonic deletions (E2 to E9, E5 to E9, E8, E10, E14, and E1 to E15), duplications (E11 to E12), and a nonsense mutation, p.S22*, were identified. Traditional multiplex ligation-dependent probe amplification and Sanger sequencing approaches cannot differentiate the origin of the exonic deletions in the 3' region when PMS2 and PMS2CL share identical sequences as a result of gene conversion. Our approach allows unambiguous identification of mutations in the active gene with a straightforward long-range-PCR/NGS method. Breakpoint analysis of multiple samples revealed that recurrent exon 14 deletions are mediated by homologous Alu sequences. Our comprehensive approach provides a reliable tool for accurate molecular analysis of genes containing multiple copies of highly homologous sequences and should improve PMS2 molecular analysis for patients with Lynch syndrome. Copyright © 2015 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
2012-01-01
High-dimensional gene expression data provide a rich source of information because they capture the expression level of genes in dynamic states that reflect the biological functioning of a cell. For this reason, such data are well suited to revealing systems-level properties of a cell, e.g., in order to elucidate molecular mechanisms of complex diseases like breast or prostate cancer. However, this is not only strongly dependent on the sample size and the correlation structure of a data set, but also on the statistical hypotheses tested. Many different approaches have been developed over the years to analyze gene expression data to (I) identify changes in single genes, (II) identify changes in gene sets or pathways, and (III) identify changes in the correlation structure in pathways. In this paper, we review statistical methods for all three types of approaches, including subtypes, in the context of cancer data, provide links to software implementations and tools, and also address the general problem of multiple hypothesis testing. Further, we provide recommendations for the selection of such analysis methods. Reviewers: This article was reviewed by Arcady Mushegian, Byung-Soo Kim and Joel Bader. PMID:23227854
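Since the review flags multiple hypothesis testing as a general problem, a minimal sketch of the standard Benjamini-Hochberg step-up procedure is given below; it is a textbook method, not code from the article.

```python
# Minimal sketch: Benjamini-Hochberg step-up procedure for FDR control.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= alpha*i/m
        reject[order[:k + 1]] = True
    return reject
```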
Global high-frequency source imaging accounting for complexity in Green's functions
NASA Astrophysics Data System (ADS)
Lambert, V.; Zhan, Z.
2017-12-01
The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.
Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P
2015-09-01
Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.
Bolin, Jocelyn H; Edwards, Julianne M; Finch, W Holmes; Cassady, Jerrell C
2014-01-01
Although traditional clustering methods (e.g., K-means) have been shown to be useful in the social sciences it is often difficult for such methods to handle situations where clusters in the population overlap or are ambiguous. Fuzzy clustering, a method already recognized in many disciplines, provides a more flexible alternative to these traditional clustering methods. Fuzzy clustering differs from other traditional clustering methods in that it allows for a case to belong to multiple clusters simultaneously. Unfortunately, fuzzy clustering techniques remain relatively unused in the social and behavioral sciences. The purpose of this paper is to introduce fuzzy clustering to these audiences who are currently relatively unfamiliar with the technique. In order to demonstrate the advantages associated with this method, cluster solutions of a common perfectionism measure were created using both fuzzy clustering and K-means clustering, and the results compared. Results of these analyses reveal that different cluster solutions are found by the two methods, and the similarity between the different clustering solutions depends on the amount of cluster overlap allowed for in fuzzy clustering.
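To make the contrast with hard assignment concrete, here is a minimal toy implementation of fuzzy c-means in which each case receives a membership in every cluster; it is a generic sketch, not the software used in the paper.

```python
# Minimal sketch: fuzzy c-means, where U[i, k] is the degree to which case i
# belongs to cluster k (rows of U sum to 1), unlike K-means' hard labels.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # random memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                 # renormalise rows
    return centers, U
```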
A formal concept analysis approach to consensus clustering of multi-experiment expression data
2014-01-01
Background: Presently, with the increasing number and complexity of available gene expression datasets, the combination of data from multiple microarray studies addressing a similar biological question is gaining importance. The analysis and integration of multiple datasets are expected to yield more reliable and robust results since they are based on a larger number of samples and the effects of the individual study-specific biases are diminished. This is supported by recent studies suggesting that important biological signals are often preserved or enhanced by multiple experiments. An approach to combining data from different experiments is the aggregation of their clusterings into a consensus or representative clustering solution which increases the confidence in the common features of all the datasets and reveals the important differences among them. Results: We propose a novel generic consensus clustering technique that applies a Formal Concept Analysis (FCA) approach for the consolidation and analysis of clustering solutions derived from several microarray datasets. These datasets are initially divided into groups of related experiments with respect to a predefined criterion. Subsequently, a consensus clustering algorithm is applied to each group resulting in a clustering solution per group. These solutions are pooled together and further analysed by employing FCA which allows extracting valuable insights from the data and generating a gene partition over all the experiments. In order to validate the FCA-enhanced approach, two consensus clustering algorithms are adapted to incorporate the FCA analysis. Their performance is evaluated on gene expression data from a multi-experiment study examining the global cell-cycle control of fission yeast. The FCA results derived from both methods demonstrate that, although both algorithms optimize different clustering characteristics, FCA is able to overcome and diminish these differences and preserve some relevant biological signals. Conclusions: The proposed FCA-enhanced consensus clustering technique is a general approach to the combination of clustering algorithms with FCA for deriving clustering solutions from multiple gene expression matrices. The experimental results presented herein demonstrate that it is a robust data integration technique able to produce a good-quality clustering solution that is representative of the whole set of expression matrices. PMID:24885407
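As a toy illustration of the FCA machinery the approach builds on, the sketch below implements the two derivation operators on a binary gene-by-cluster context; a formal concept is a pair fixed by their composition. This is a generic illustration, not the authors' pipeline.

```python
# Minimal sketch: FCA derivation operators on a binary context matrix whose
# rows are genes (objects) and columns are cluster memberships (attributes).
import numpy as np

def intent(context, genes):
    """Attributes shared by every gene in the set."""
    return set(np.flatnonzero(context[sorted(genes)].all(axis=0)))

def extent(context, attrs):
    """Genes possessing every attribute in the set."""
    return set(np.flatnonzero(context[:, sorted(attrs)].all(axis=1)))

# (A, B) is a formal concept iff extent(context, B) == A
# and intent(context, A) == B.
```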
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial frequency form error. Optical measurement methods are of the non-contact type and possess high precision, as required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the use of coolant will induce an opaque coolant barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed new approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of this new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant, with a coolant thickness of 15 mm. This corresponds to a relative uncertainty (2σ) of up to 4.35%, with the workpiece surface deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
Simultaneous nano-tracking of multiple motor proteins via spectral discrimination of quantum dots.
Kakizuka, Taishi; Ikezaki, Keigo; Kaneshiro, Junichi; Fujita, Hideaki; Watanabe, Tomonobu M; Ichimura, Taro
2016-07-01
Simultaneous nanometric tracking of multiple motor proteins was achieved by combining multicolor fluorescent labeling of target proteins and imaging spectroscopy, revealing dynamic behaviors of multiple motor proteins at the sub-diffraction-limit scale. Using quantum dot probes of distinct colors, we experimentally verified the localization precision to be a few nanometers at temporal resolution of 30 ms or faster. One-dimensional processive movement of two heads of a single myosin molecule and multiple myosin molecules was successfully traced. Furthermore, the system was modified for two-dimensional measurement and applied to tracking of multiple myosin molecules. Our approach is useful for investigating cooperative movement of proteins in supramolecular nanomachinery.
Secure Multiparty Quantum Computation for Summation and Multiplication.
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-21
As a fundamental primitive, Secure Multiparty Summation and Multiplication can be used to build complex secure protocols for other multiparty computations, especially numerical computations. However, there is still a lack of systematic and efficient quantum methods to compute Secure Multiparty Summation and Multiplication. In this paper, we present a novel and efficient quantum approach to securely compute the summation and multiplication of multiparty private inputs, respectively. Compared to classical solutions, our proposed approach can ensure unconditional security and perfect privacy protection based on the physical principles of quantum mechanics.
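For orientation, the classical primitive being secured can be illustrated with simple additive masking, as in the sketch below; this is a classical analogue for intuition only, and does not reproduce the quantum construction or its unconditional security guarantees.

```python
# Minimal sketch: classical additive-share summation, the primitive whose
# quantum-secure counterpart the paper constructs. Illustration only.
import secrets

P = 2**61 - 1  # public modulus, assumed larger than any possible true sum

def share(value, n):
    """Split a private value into n random shares summing to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [12, 30, 7]                            # three parties' private inputs
all_shares = [share(v, 3) for v in inputs]      # party j keeps column j
column_sums = [sum(col) % P for col in zip(*all_shares)]
print(sum(column_sums) % P)                     # 49; no single input revealed
```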
A data-driven multiplicative fault diagnosis approach for automation processes.
Hao, Haiyang; Zhang, Kai; Ding, Steven X; Chen, Zhiwen; Lei, Yaguo
2014-09-01
This paper presents a new data-driven method for diagnosing multiplicative key performance degradation in automation processes. Different from the well-established additive fault diagnosis approaches, the proposed method aims at identifying those low-level components which increase the variability of process variables and cause performance degradation. Based on process data, features of multiplicative fault are extracted. To identify the root cause, the impact of fault on each process variable is evaluated in the sense of contribution to performance degradation. Then, a numerical example is used to illustrate the functionalities of the method and Monte-Carlo simulation is performed to demonstrate the effectiveness from the statistical viewpoint. Finally, to show the practical applicability, a case study on the Tennessee Eastman process is presented. Copyright © 2013. Published by Elsevier Ltd.
Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy
2001-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
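The control-allocation step can be pictured as a box-constrained least-squares problem. The sketch below solves min ‖Bu − v‖² over actuator limits with a projected fixed-point (gradient) iteration; it is a generic stand-in under assumed matrices and limits, not the X-33 algorithm itself.

```python
# Minimal sketch: control allocation as box-constrained least squares,
# min ||B u - v||^2 with u_min <= u <= u_max, via projected fixed-point
# iteration. B maps actuator deflections u to commanded moments v.
import numpy as np

def allocate(B, v, u_min, u_max, iters=500):
    u = np.zeros(B.shape[1])
    step = 1.0 / np.linalg.norm(B.T @ B, 2)     # keeps the iteration stable
    for _ in range(iters):
        u = u - step * (B.T @ (B @ u - v))      # descend on the residual
        u = np.clip(u, u_min, u_max)            # project onto actuator limits
    return u
```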
Choi, Ted; Eskin, Eleazar
2013-01-01
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue. PMID:23785294
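A minimal sketch of the meta-analytic building block is shown below: a DerSimonian-Laird random-effects combination of one eQTL's per-tissue effect estimates, where the heterogeneity variance τ² captures effects that differ across tissues. This is the textbook estimator, not the authors' method.

```python
# Minimal sketch: DerSimonian-Laird random-effects meta-analysis of one
# eQTL's effect sizes (beta) and standard errors (se) across tissues.
import numpy as np

def random_effects(beta, se):
    w = 1.0 / se**2
    beta_fe = np.sum(w * beta) / np.sum(w)          # fixed-effects estimate
    Q = np.sum(w * (beta - beta_fe) ** 2)           # heterogeneity statistic
    tau2 = max(0.0, (Q - (len(beta) - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                     # random-effects weights
    beta_re = np.sum(w_re * beta) / np.sum(w_re)
    return beta_re, np.sqrt(1.0 / np.sum(w_re)), tau2
```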
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
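The benchmark the study endorses, multiple linear regression on calendar variables, is easy to sketch; below is a minimal version with day-of-week and month dummies fit by ordinary least squares. Column names and the omission of site-specific special-day terms are simplifying assumptions.

```python
# Minimal sketch: calendar-variable regression for daily ED volumes.
# `daily` is assumed to be a pandas Series of counts indexed by date.
import numpy as np
import pandas as pd

def fit_calendar_model(daily):
    X = pd.get_dummies(pd.DataFrame({
        "dow": daily.index.dayofweek.astype(str),     # day-of-week effects
        "month": daily.index.month.astype(str),       # seasonal effects
    }), drop_first=True).astype(float)
    X.insert(0, "intercept", 1.0)
    beta, *_ = np.linalg.lstsq(X.values, daily.values, rcond=None)
    return X.columns, beta            # forecast a future day as x_row @ beta
```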
Kim, Sungjin; Jinich, Adrián; Aspuru-Guzik, Alán
2017-04-24
We propose a multiple descriptor multiple kernel (MultiDK) method for efficient molecular discovery using machine learning. We show that the MultiDK method improves both the speed and accuracy of molecular property prediction. We apply the method to the discovery of electrolyte molecules for aqueous redox flow batteries. Using multiple-type, as opposed to single-type, descriptors, we obtain more relevant features for machine learning. Following the principle of "wisdom of the crowds", the combination of multiple-type descriptors significantly boosts prediction performance. Moreover, by employing multiple kernels (more than one kernel function for a set of input descriptors), MultiDK exploits nonlinear relations between molecular structure and properties better than a linear regression approach. The multiple kernels consist of a Tanimoto similarity kernel and a linear kernel for a set of binary descriptors and a set of nonbinary descriptors, respectively. Using MultiDK, we achieve an average performance of r² = 0.92 with a test set of molecules for solubility prediction. We also extend MultiDK to predict pH-dependent solubility and apply it to a set of quinone molecules with different ionizable functional groups to assess their performance as flow battery electrolytes.
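The kernel combination described above can be sketched compactly: a Tanimoto kernel on the binary descriptors plus a linear kernel on the non-binary ones, used inside kernel ridge regression. The equal kernel weighting and the regularisation value below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: Tanimoto kernel (binary descriptors Xb) + linear kernel
# (non-binary descriptors Xc), combined for kernel ridge regression.
import numpy as np

def tanimoto(A, B):
    inter = A @ B.T
    union = A.sum(1)[:, None] + B.sum(1)[None, :] - inter
    return inter / np.maximum(union, 1)

def multikernel_predict(Xb_tr, Xc_tr, y, Xb_te, Xc_te, lam=1e-3):
    K = tanimoto(Xb_tr, Xb_tr) + Xc_tr @ Xc_tr.T        # combined train kernel
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    K_te = tanimoto(Xb_te, Xb_tr) + Xc_te @ Xc_tr.T     # test-vs-train kernel
    return K_te @ alpha
```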
Preuner, Sandra; Barna, Agnes; Frommlet, Florian; Czurda, Stefan; Konstantin, Byrgazov; Alikian, Mary; Machova Polakova, Katerina; Sacha, Tomasz; Richter, Johan; Lion, Thomas; Gabriel, Christian
2016-01-01
Identification and quantitative monitoring of mutant BCR-ABL1 subclones displaying resistance to tyrosine kinase inhibitors (TKIs) have become important tasks in patients with Ph-positive leukemias. Different technologies have been established for patient screening. Various next-generation sequencing (NGS) platforms facilitating sensitive detection and quantitative monitoring of mutations in the ABL1-kinase domain (KD) have been introduced recently, and are expected to become the preferred technology in the future. However, broad clinical implementation of NGS methods has been hampered by the limited accessibility at different centers and the current costs of analysis which may not be regarded as readily affordable for routine diagnostic monitoring. It is therefore of interest to determine whether NGS platforms can be adequately substituted by other methodological approaches. We have tested three different techniques including pyrosequencing, LD (ligation-dependent)-PCR and NGS in a series of peripheral blood specimens from chronic myeloid leukemia (CML) patients carrying single or multiple mutations in the BCR-ABL1 KD. The proliferation kinetics of mutant subclones in serial specimens obtained during the course of TKI-treatment revealed similar profiles via all technical approaches, but individual specimens showed statistically significant differences between NGS and the other methods tested. The observations indicate that different approaches to detection and quantification of mutant subclones may be applicable for the monitoring of clonal kinetics, but careful calibration of each method is required for accurate size assessment of mutant subclones at individual time points. PMID:27136541
Monson, Daniel H.; Bowen, Lizabeth
2015-01-01
Overall, the variety of indices used to measure population status throughout the sea otter's range has provided insights into the mechanisms driving the trajectories of various sea otter populations that no single index could, and we suggest using multiple methods to measure a population's status at multiple spatial and temporal scales. The work described here also illustrates the usefulness of long-term data sets and/or approaches that can be used to assess population status retrospectively, providing information otherwise not available. While not all systems will be as amenable to using all the approaches presented here, we expect innovative researchers could adapt analogous multi-scale methods to a broad range of habitats and species including apex predators occupying the top trophic levels, which are often of conservation concern.
Teaching Electric Circuits with Multiple Batteries: A Qualitative Approach
ERIC Educational Resources Information Center
Smith, David P.; van Kampen, Paul
2011-01-01
We have investigated preservice science teachers' qualitative understanding of circuits consisting of multiple batteries in single and multiple loops using a pretest and post-test method and classroom observations. We found that most students were unable to explain the effects of adding batteries in single and multiple loops, as they tended to use…
Clegg, Paul S; Tavacoli, Joe W; Wilde, Pete J
2016-01-28
Multiple emulsions have great potential for application in food science as a means to reduce fat content or for controlled encapsulation and release of actives. However, neither production nor stability is straightforward. Typically, multiple emulsions are prepared via two emulsification steps and a variety of approaches have been deployed to give long-term stability. It is well known that multiple emulsions can be prepared in a single step by harnessing emulsion inversion, although the resulting emulsions are usually short-lived. Recently, several contrasting methods have been demonstrated which give rise to stable multiple emulsions via one-step production processes. Here we review the current state of microfluidic, polymer-stabilized and particle-stabilized approaches, which rely on phase separation, electrolyte effects, and the trapping of solvent with particles, respectively.
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches to classify data of multiple types is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
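One simple instance of the consensus-theoretic combination discussed above is a reliability-weighted linear opinion pool over per-source class posteriors, sketched below; the weights stand in for whatever reliability measures are adopted.

```python
# Minimal sketch: linear opinion pool over data sources. Each source
# contributes an (n_samples, n_classes) posterior matrix; weights encode
# source reliability and are assumed given.
import numpy as np

def linear_opinion_pool(posteriors, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalise reliabilities
    pooled = sum(wi * p for wi, p in zip(w, posteriors))
    return pooled.argmax(axis=1)                     # consensus class labels
```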
Interactions Dominate the Dynamics of Visual Cognition
Stephen, Damian G.; Mirman, Daniel
2010-01-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. PMID:20070957
Reconfigurable Flight Control Designs With Application to the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zhenglu
1999-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
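The Monte Carlo confidence interval for the combined mediated effect can be sketched in a few lines: sample the pooled a and b paths from normal distributions and take quantiles of their product. The inputs below are assumed pooled estimates and standard errors, not values from the paper.

```python
# Minimal sketch: Monte Carlo confidence interval for a mediated effect a*b
# given pooled path estimates and their standard errors.
import numpy as np

def mc_ci_mediated(a_hat, se_a, b_hat, se_b, n=100_000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    ab = rng.normal(a_hat, se_a, n) * rng.normal(b_hat, se_b, n)
    lo, hi = np.quantile(ab, [(1 - level) / 2, (1 + level) / 2])
    return a_hat * b_hat, (lo, hi)

# e.g. mc_ci_mediated(0.30, 0.08, 0.25, 0.10) -> point estimate and 95% CI
```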
Identity, Intersectionality, and Mixed-Methods Approaches
ERIC Educational Resources Information Center
Harper, Casandra E.
2011-01-01
In this article, the author argues that current strategies to study and understand students' identities fall short of fully capturing their complexity. A multi-dimensional perspective and a mixed-methods approach can reveal nuance that is missed with current approaches. The author offers an illustration of how mixed-methods research can promote a…
van Duijl, Marjolein; Kleijn, Wim; de Jong, Joop
2013-09-01
As in many cultures, spirit possession is a common idiom of distress in Uganda. The DSM-IV contains experimental research criteria for dissociative and possession trance disorder (DTD and PTD), which are under review for the DSM-5. In the current proposed categories of the DSM-5, PTD is subsumed under dissociative identity disorder (DID) and DTD under dissociative disorders not elsewhere classified. Evaluation of these criteria is currently urgently required. This study explores the match between local symptoms of spirit possession in Uganda and experimental research criteria for PTD in the DSM-IV and proposed criteria for DID in the DSM-5. A mixed-method approach was used combining qualitative and quantitative research methods. Local symptoms were explored of 119 spirit possessed patients, using illness narratives and a cultural dissociative symptoms' checklist. Possible meaningful clusters of symptoms were inventoried through multiple correspondence analysis. Finally, local symptoms were compared with experimental criteria for PTD in the DSM-IV and proposed criteria for DID in the DSM-5. Illness narratives revealed different phases of spirit possession, with passive-influence experiences preceding the actual possession states. Multiple correspondence analysis of symptoms revealed two dimensions: 'passive' and 'active' symptoms. Local symptoms, such as changes in consciousness, shaking movements, and talking in a voice attributed to spirits, match with DSM-IV-PTD and DSM-5-DID criteria. Passive-influence experiences, such as feeling influenced or held by powers from outside, strange dreams, and hearing voices, deserve to be more explicitly described in the proposed criteria for DID in the DSM-5. The suggested incorporation of PTD in DID in the DSM-5 and the envisioned separation of DTD and PTD in two distinctive categories have disputable aspects.
Modeling Incorrect Responses to Multiple-Choice Items with Multilinear Formula Score Theory.
ERIC Educational Resources Information Center
Drasgow, Fritz; And Others
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
High School Students' Concepts of Acids and Bases.
ERIC Educational Resources Information Center
Ross, Bertram H. B.
An investigation of Ontario high school students' understanding of acids and bases with quantitative and qualitative methods revealed misconceptions. A concept map, based on the objectives of the Chemistry Curriculum Guideline, generated multiple-choice items and interview questions. The multiple-choice test was administered to 34 grade 12…
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
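The core similarity-comparison idea can be sketched with centred kernel matrices and a normalised trace statistic (an RV-style coefficient), as below; the GAMuT test's analytic p-value machinery is not reproduced here.

```python
# Minimal sketch: compare multivariate-phenotype similarity with rare-variant
# genotype similarity via centred matrices and a normalised trace statistic.
import numpy as np

def center(K):
    H = np.eye(len(K)) - np.ones_like(K) / len(K)
    return H @ K @ H

def similarity_statistic(Y, G):
    """Y: n x p phenotypes; G: n x m rare-variant genotypes (0/1/2)."""
    P = center(Y @ Y.T)                       # phenotype similarity
    Kg = center(G @ G.T)                      # genotype similarity
    return np.trace(P @ Kg) / np.sqrt(np.trace(P @ P) * np.trace(Kg @ Kg))
```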
Curcumin therapeutic promises and bioavailability in colorectal cancer.
Shehzad, A; Khan, S; Shehzad, O; Lee, Y S
2010-07-01
Curcumin, a polyphenol derived from turmeric, is one of the most commonly used and highly researched phytochemicals. Several research studies have provided interesting insights into the multiple mechanisms by which curcumin may mediate chemotherapy and chemopreventive effects on cancers, including colorectal cancer. Curcumin has the ability to inhibit carcinogenic promotion of colorectal cancer through the modulation of multiple molecular targets such as transcription factors, enzymes, cell cycle proteins, cell surface adhesion proteins, survival pathways and cytokines. A number of clinical trials dealing with curcumin's efficacy and safety revealed poor absorption and low bioavailability. Different factors contributing to the low bioavailability include low plasma level, tissue distribution, rapid metabolism and elimination from the body. Although curcumin's poor absorption and low systemic bioavailability limit its translation into the clinic, several approaches can be used to enhance its absorption and achieve therapeutic levels. Recent clinical trials suggest a potential role for curcumin in colorectal cancer therapy.
Rakotonarivo, S T; Walker, S C; Kuperman, W A; Roux, P
2011-12-01
A method to actively localize a small perturbation in a multiple scattering medium using a collection of remote acoustic sensors is presented. The approach requires only minimal modeling and no knowledge of the scatterer distribution and properties of the scattering medium and the perturbation. The medium is ensonified before and after a perturbation is introduced. The coherent difference between the measured signals then reveals all field components that have interacted with the perturbation. A simple single scatter filter (that ignores the presence of the medium scatterers) is matched to the earliest change of the coherent difference to localize the perturbation. Using a multi-source/receiver laboratory setup in air, the technique has been successfully tested with experimental data at frequencies varying from 30 to 60 kHz (wavelength ranging from 0.5 to 1 cm) for cm-scale scatterers in a scattering medium with a size two to five times bigger than its transport mean free path. © 2011 Acoustical Society of America
Multiple Spacecraft Study of the Impact of Turbulence on Reconnection Rates
NASA Technical Reports Server (NTRS)
Wendel, Deirdre; Goldstein, Melvyn; Figueroa-Vinas, Adolfo; Adrian, Mark; Sahraoui, Fouad
2011-01-01
Magnetic turbulence and secondary island formation have reemerged as possible explanations for fast reconnection. Recent three-dimensional simulations reveal the formation of secondary islands that serve to shorten the current sheet and increase the accelerating electric field, while both simulations and observations witness electron holes whose collapse energizes electrons. However, few data studies have explicitly investigated the effect of turbulence and islands on the reconnection rate. We present a more comprehensive analysis of the effect of turbulence and islands on reconnection rates observed in space. Our approach takes advantage of multiple spacecraft to find the location of the spacecraft relative to the inflow and the outflow, to estimate the reconnection electric field, to indicate the presence and size of islands, and to determine wave vectors indicating turbulence. A superposed epoch analysis provides independent estimates of spatial scales and a reconnection electric field. We apply k-filtering and a new method adopted from seismological analyses to identify the wavevectors. From several case studies of reconnection events, we obtain preliminary estimates of the spectral scaling law, identify wave modes, and present a method for finding the reconnection electric field associated with the wave modes.
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential, the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds and one injection per pool of five compounds have been investigated. No significant differences in the optimised instrumental parameters for MRM methods were found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
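The geometric idea, sliding each succeeding recession segment horizontally until its vertex sits on the curve assembled so far, can be sketched as below. The interpolation and ordering details are illustrative; this is not the published VBA algorithm.

```python
# Minimal sketch: build a master recession curve by horizontally translating
# each recession segment so its vertex (first, highest value) lands on the
# curve assembled from the preceding segments.
import numpy as np

def build_mrc(segments):
    """segments: list of (t, q) arrays with q decreasing within each segment."""
    segments = sorted(segments, key=lambda s: s[1][0], reverse=True)
    t0, q0 = segments[0]
    mrc_t, mrc_q = list(t0 - t0[0]), list(q0)
    for t, q in segments[1:]:
        qs, ts = np.array(mrc_q), np.array(mrc_t)
        order = np.argsort(qs)                     # np.interp needs ascending x
        t_hit = np.interp(q[0], qs[order], ts[order])
        mrc_t.extend(t - t[0] + t_hit)             # translate segment in time
        mrc_q.extend(q)
    order = np.argsort(mrc_t)
    return np.array(mrc_t)[order], np.array(mrc_q)[order]
```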
Weinberg, W A; McLean, A; Snider, R L; Rintelmann, J W; Brumback, R A
1989-12-01
Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then for each question selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then for each question reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a "bypass approach" to education of poor readers in which testing is accomplished using an orally read multiple-choice format can enhance the child's school performance on reading-related tasks. Using a listening while reading input method may further enhance performance.
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
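The correction reduces to a straight-line fit: regressing raw filter-pad absorption against PSICAM absorption over wavelength gives the amplification factor as the slope and the scattering offset as the intercept. The sketch below assumes both spectra are sampled on one common wavelength grid; names are illustrative.

```python
# Minimal sketch: simultaneous estimation of pathlength amplification (beta)
# and scattering offset (o) by regressing filter-pad values on PSICAM values,
# then correcting the filter-pad spectrum.
import numpy as np

def correct_filter_pad(a_fp_raw, a_psicam):
    beta, offset = np.polyfit(a_psicam, a_fp_raw, 1)   # a_fp ≈ beta*a + o
    a_fp_corrected = (a_fp_raw - offset) / beta
    return a_fp_corrected, beta, offset
```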
Considerations of multiple imputation approaches for handling missing data in clinical trials.
Quan, Hui; Qi, Li; Luo, Xiaodong; Darchy, Loic
2018-07-01
Missing data exist in all clinical trials and raise serious issues for the interpretability of trial results. There is no universally applicable solution for all missing data problems. Methods for handling missing data depend on the circumstances, particularly the assumptions on the missing data mechanisms. In recent years, when the missing-at-random mechanism cannot be assumed, conservative approaches such as control-based and return-to-baseline multiple imputation have been applied to deal with missing data. In this paper, we focus on the variability used in the data analysis of these approaches. As demonstrated by examples, the choice of the variability can impact the conclusion of the analysis. Besides methods for continuous endpoints, we also discuss methods for binary and time-to-event endpoints, as well as considerations for non-inferiority assessment. Copyright © 2018. Published by Elsevier Inc.
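Whatever imputation model is chosen, the analysis-stage variability is carried by Rubin's rules, sketched below; the between-imputation term is exactly the component whose specification the paper shows can change conclusions. This is the textbook formula, not the paper's code.

```python
# Minimal sketch: Rubin's rules for pooling one parameter across m imputed
# datasets; total variance adds a between-imputation penalty.
import numpy as np

def rubins_rules(estimates, variances):
    m = len(estimates)
    q_bar = np.mean(estimates)              # pooled point estimate
    u_bar = np.mean(variances)              # average within-imputation variance
    b = np.var(estimates, ddof=1)           # between-imputation variance
    return q_bar, u_bar + (1.0 + 1.0 / m) * b
```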
Cryptococcosis, silicosis, and tuberculous pseudotumor in the same pulmonary lobe
da Silva, Geruza Alves; Brandão, Daniel Ferracioli; Vianna, Elcio Oliveira; de Sá, João Batista Carlos; Baddini-Martinez, José
2013-01-01
Tuberculosis and cryptococcosis are infectious diseases that can result in the formation of single or multiple nodules in immunocompetent patients. Exposure to silica is known to raise the risk of infection with Mycobacterium tuberculosis. We report the case of an elderly man with no history of opportunistic infections and no clinical evidence of immunodeficiency but with a six-month history of dry cough and nocturnal wheezing. A chest X-ray revealed a mass measuring 5.0 × 3.5 cm in the right upper lobe. The diagnostic workup of the mass revealed tuberculosis. The histopathological analysis of the surrounding parenchyma revealed silicosis and cryptococcosis. Cryptococcosis was also found in masses identified in the mediastinal lymph nodes. The surgical approach was indicated because of the degree of pleuropulmonary involvement, the inconclusive results obtained with the invasive and noninvasive methods applied, and the possibility of malignancy. This case illustrates the difficulty inherent to the assessment of infectious or inflammatory pulmonary pseudotumors, the differential diagnosis of which occasionally requires a radical surgical approach. Despite the presence of respiratory symptoms for six months, the first chest X-ray was performed only at the end of that period. We discuss the possible pathogenic mechanisms that might have led to the combination of three types of granulomatous lesions in the same lobe, and we emphasize the need for greater awareness of atypical presentations of pulmonary tuberculosis. PMID:24310636
An omnibus test for family-based association studies with multiple SNPs and multiple phenotypes.
Lasky-Su, Jessica; Murphy, Amy; McQueen, Matthew B; Weiss, Scott; Lange, Christoph
2010-06-01
We propose an omnibus family-based association test (MFBAT) that can be applied to multiple markers and multiple phenotypes and that has only one degree of freedom. The proposed test statistic extends current FBAT methodology to incorporate multiple markers as well as multiple phenotypes. Using simulation studies, power estimates for the proposed methodology are compared with the standard methodologies. On the basis of these simulations, we find that MFBAT substantially outperforms other methods, including haplotypic approaches and performing multiple tests with individual single-nucleotide polymorphisms (SNPs) and single phenotypes. The practical relevance of the approach is illustrated by an application to asthma, in which SNP/phenotype combinations are identified that reach overall significance and that would not have been identified using other approaches. This methodology is directly applicable to cases in which there are multiple SNPs, such as candidate gene studies, cases in which there are multiple phenotypes, such as expression data, and cases in which there are multiple phenotypes and genotypes, such as genome-wide association studies that incorporate expression profiles as phenotypes. This program is available in the PBAT analysis package.
Srivastava, Mousami; Khurana, Pankaj; Sugadev, Ragumani
2012-11-02
The tissue-specific UniGene sets derived from more than one million expressed sequence tags (ESTs) in the NCBI GenBank database offer a platform for identifying significantly and differentially expressed tissue-specific genes by in-silico methods. Digital differential display (DDD) rapidly creates transcription profiles based on EST comparisons and numerically calculates, as a fraction of the pool of ESTs, the relative sequence abundance of known and novel genes. However, the process of identifying the most likely tissue for a specific disease in which to search for candidate genes from the pool of differentially expressed genes remains difficult. Therefore, we used the Gene Ontology (GO) semantic similarity score to measure the GO similarity between gene products of lung tissue-specific candidate genes from control (normal) and disease (cancer) sets. The resulting semantic similarity score matrix was subjected to hierarchical clustering and represented as a dendrogram. Cluster stability was assessed by multiple bootstrapping, which also computes a p-value for each cluster and corrects the bias of the bootstrap probability. Hierarchical clustering with multiple bootstrapping (α = 0.95) identified seven clusters. The comparative as well as subtractive approach revealed a set of 38 biomarkers comprising four distinct lung cancer signature biomarker clusters (panels 1-4). Further gene enrichment analysis revealed that the panels represent lung cancer-linked metastasis diagnostic biomarkers (panel 1), chemotherapy/drug resistance biomarkers (panel 2), hypoxia-regulated biomarkers (panel 3) and lung extracellular matrix biomarkers (panel 4). Expression analysis reveals that hypoxia-induced lung cancer-related biomarkers (panel 3), HIF and its modulating proteins (TGM2, CSNK1A1, CTNNA1, NAMPT/Visfatin, TNFRSF1A, ETS1, SRC-1, FN1, APLP2, DMBT1/SAG, AIB1 and AZIN1), are significantly downregulated. All downregulated genes in this panel were highly upregulated in most other types of cancers. These panels of proteins may represent signature biomarkers for lung cancer and will aid in lung cancer diagnosis and disease monitoring as well as in the prediction of responses to therapeutics.
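As a rough illustration of the clustering step, the sketch below hierarchically clusters gene products from a similarity matrix and scores cluster stability with a naive feature-level bootstrap; it is a simplification of the paper's pipeline (GO semantic similarity scores and multiscale bootstrapping with bias-corrected p-values), and all data here are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
genes, feats = 8, 40
X = rng.normal(size=(genes, feats))
X[:4] += 2.0                                # hypothetical shared profile

def cluster_labels(data, k=2):
    sim = np.corrcoef(data)                 # stand-in for GO semantic similarity
    dist = squareform(1.0 - sim, checks=False)
    return fcluster(linkage(dist, "average"), t=k, criterion="maxclust")

base = cluster_labels(X)
B, co = 200, np.zeros((genes, genes))
for _ in range(B):                          # naive bootstrap over features
    lb = cluster_labels(X[:, rng.integers(0, feats, feats)])
    co += lb[:, None] == lb[None, :]
print(base, (co / B).round(2))              # pairwise co-clustering support
```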
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include a limited field-of-view, reliance on frequently limited acoustic windows, and poor signal-to-noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore does not require image overlap; 2) unlike image registration based approaches, its alignment accuracy is not affected by poor image quality; 3) in contrast to previous optical tracking based systems, it does not suffer from line-of-sight limitations; and 4) it does not require any initial calibration. In this pilot project, we showed, using a heart phantom, that our method can fuse multiple echocardiographic images and improve the field-of-view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates that an electromagnetic system can be used for the fusion of multiple echocardiography images with seamless integration of sensors into the transducer.
Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams
NASA Technical Reports Server (NTRS)
Lipatov, Alexander S.
2011-01-01
We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in the case of inter-penetrating flow (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion/fluid-electron) model. Note that a mesh may be needed when the electromagnetic field must be computed. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While the particle-mesh method is a well-verified approach, the CPK method appears to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method still requires verification for studying basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.
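A minimal sketch of the basic two-into-one merge that such procedures build on is shown below; the actual PMCPK merging criteria for inter-penetrating beams (which particles are eligible to merge, and how separate beams are kept distinct) go beyond this sketch.

```python
import numpy as np

def merge_two(w1, x1, v1, w2, x2, v2):
    """Merge two macro-particles into one. Conserves total weight and
    momentum exactly; kinetic energy is generally not conserved, the
    usual trade-off of two-into-one merging."""
    w = w1 + w2
    x = (w1 * x1 + w2 * x2) / w             # weight-averaged position
    v = (w1 * v1 + w2 * v2) / w             # momentum-conserving velocity
    return w, x, v

w, x, v = merge_two(1.0, np.zeros(3), np.array([1.0, 0.0, 0.0]),
                    2.0, np.array([0.1, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
print(w, x, v)
```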
NASA Astrophysics Data System (ADS)
Mazoochi, M.; Pourmina, M. A.; Bakhshi, H.
2015-03-01
The core aim of this work is the maximization of the achievable data rate of secondary user pairs (SU pairs) while ensuring the QoS of primary users (PUs). All users are assumed to be equipped with multiple antennas. It is assumed that when PUs are present, direct communication between SU pairs introduces intolerable interference to PUs, and SUs therefore transmit using the cooperation of other SUs and avoid transmitting in the direct channel. In brief, an adaptive cooperative strategy for multiple-input/multiple-output (MIMO) cognitive radio networks is proposed. In the presence of PUs, the issue of joint relay selection and power allocation in Underlay MIMO Cooperative Cognitive Radio Networks (U-MIMO-CCRN) is addressed. The optimal approach for determining the power allocation and the cooperating SU is proposed. Besides, the outage probability of the proposed communication protocol is derived. Due to the high complexity of the optimal approach, a low-complexity approach is further proposed and its performance is evaluated using simulations. The simulation results reveal that the performance loss due to the low-complexity approach is only about 14%, while the complexity is greatly reduced.
Integrated optics to improve resolution on multiple configuration
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei
2015-04-01
To reveal how structure can improve imaging resolution, further technical requirements are proposed concerning the function and development of multiple-configuration systems. To break through the diffraction limit, smart structures are recommended as the most efficient and economical method, used to improve system performance, especially signal-to-noise ratio and resolution. Integrated optics were considered in the selection and applied to a typical multiple configuration by means of simulation experiments. This methodology can change the traditional design concept and broaden the application space. Our calculations, using the multiple matrix transfer method together with the correlative algorithm and full calculations, show the expected beam shaping through the system; the experimental results supporting this argument will be reported in the presentation.
Successive equimarginal approach for optimal design of a pump and treat system
NASA Astrophysics Data System (ADS)
Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.
2007-08-01
An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
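The successive reallocation at the heart of SEA can be sketched as follows; the marginal-productivity function below is a hypothetical diminishing-returns placeholder, whereas the real method evaluates marginal cleanup effectiveness through a groundwater flow and transport model subject to the cleanup constraints.

```python
import numpy as np

def successive_equimarginal(q, marginal, step=1.0, iters=500, tol=1e-3):
    """Shift pumping from the well with the lowest marginal productivity
    to the well with the highest until marginals (approximately) equalize."""
    q = q.astype(float)
    for it in range(iters):
        m = marginal(q)
        lo, hi = np.argmin(m), np.argmax(m)
        if m[hi] - m[lo] < tol:
            break
        shift = min(step / (1 + 0.05 * it), q[lo])  # decaying step; keep q >= 0
        q[lo] -= shift
        q[hi] += shift
    return q

# hypothetical diminishing-returns marginal productivity for three wells
marginal = lambda q: np.array([3.0, 2.0, 1.5]) / (1.0 + q)
print(successive_equimarginal(np.array([10.0, 10.0, 10.0]), marginal))
```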
Kan, Zhong-Yuan; Walters, Benjamin T.; Mayne, Leland; Englander, S. Walter
2013-01-01
Hydrogen exchange technology provides a uniquely powerful instrument for measuring protein structural and biophysical properties, quantitatively and in a nonperturbing way, and determining how these properties are implemented to produce protein function. A developing hydrogen exchange–mass spectrometry method (HX MS) is able to analyze large biologically important protein systems while requiring only minuscule amounts of experimental material. The major remaining deficiency of the HX MS method is the inability to deconvolve HX results to individual amino acid residue resolution. To pursue this goal we used an iterative optimization program (HDsite) that integrates recent progress in multiple peptide acquisition together with previously unexamined isotopic envelope-shape information and a site-resolved back-exchange correction. To test this approach, residue-resolved HX rates computed from HX MS data were compared with extensive HX NMR measurements, and analogous comparisons were made in simulation trials. These tests found excellent agreement and revealed the important computational determinants. PMID:24019478
Ye, Y; Zhang, Q; Ren, Y L; Li, J M
2017-05-28
Among the medical bamboo slips unearthed from Han tomb in Laoguanshan of Chengdu, the Zhu bing ( All Diseases ) is a monograph to discuss the characteristics of signs and symptoms. Based on the differences of writing styles, diseases involved, expounding methods and writing rules, the book is divided, by the research team, into 2 parts: All Diseases (1) and All Diseases (2). All Diseases (1) includes over 130 slips, 2 000 characters with totally more than 100 disease names, containing multiple clinical disciplines. The elaborated classification, varied naming methods, grasping the symptom characteristics guided by the four diagnostic approaches, paying attention to the comparison of similar diseases, and dealing with the prognosis and healthcare in this part reflect the holism of correspondence between human body and natural environment, and syndrome differentiation thought of combining disease with symptoms and signs, revealing its academic significance.
Watershed identification of polygonal patterns in noisy SAR images.
Moreels, Pierre; Smrekar, Suzanne E
2003-01-01
This paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observation of Mars and Jupiter's satellite Europa.
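A minimal sketch of the watershed stage on a synthetic speckled image is given below, using scikit-image; the paper's improved watershed, two-scale analysis and polygon-pattern decision logic are not reproduced, and the thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(2)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                       # hypothetical bright region
speckle = (img + 0.1) * rng.gamma(4.0, 0.25, img.shape)  # multiplicative noise, as in SAR

smoothed = ndi.uniform_filter(speckle, 5)     # coarse scale of a two-scale analysis
gradient = sobel(smoothed)
markers = np.zeros_like(img, dtype=int)
markers[smoothed < 0.3] = 1                   # background seed
markers[smoothed > 0.8] = 2                   # object seed
labels = watershed(gradient, markers)         # label boundaries = candidate edges
```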
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve for the multiple-scattering angular distribution (AD) that can calculate the AD accurately while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
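The core idea can be sketched under strong simplifying assumptions: discretize the scattering angle, treat a single-scattering phase function (here a Henyey-Greenstein stand-in) as a Markov transition kernel, and mix the k-step distributions with the probabilities of k scattering events. The paper's handling of the slab geometry and azimuth is more involved.

```python
import numpy as np
from scipy.stats import poisson

n = 181
theta = np.deg2rad(np.linspace(0.0, 180.0, n))  # polar-angle grid

def hg(cos_t, g=0.9):
    """Henyey-Greenstein phase function, a common single-scattering model."""
    return (1 - g**2) / (1 + g**2 - 2 * g * cos_t) ** 1.5

# transition matrix: relative probability of deflecting from angle i to j
# (1-D polar-angle treatment; ignores azimuth and slab boundaries)
P = hg(np.cos(theta[None, :] - theta[:, None]))
P /= P.sum(axis=1, keepdims=True)

tau = 3.0                                       # optical depth of the slab
orders = np.arange(1, 15)
pk = poisson.pmf(orders, tau)
pk /= pk.sum()                                  # condition on >= 1 scattering event

dist = np.zeros(n); dist[0] = 1.0               # collimated beam at 0 degrees
ad = np.zeros(n)
for w in pk:
    dist = dist @ P                             # one more scattering event
    ad += w * dist                              # Poisson-weighted mixture = total AD
print(ad.sum())                                 # stays normalized to 1
```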
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
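For context, a minimal sketch of the Benjamini-Hochberg step-up procedure underlying the false discovery rate concept, applied to hypothetical p-values from group-difference tests on a set of variables:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of hypotheses rejected
    while controlling the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

print(benjamini_hochberg([0.001, 0.009, 0.04, 0.2, 0.7]))
# -> [ True  True False False False]
```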
Simulating Matrix Crack and Delamination Interaction in a Clamped Tapered Beam
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Seshadri, B. R.; Ratcliffe, J. G.; Mabson, G. E.; Deobald, L. R.
2017-01-01
Blind predictions were conducted to validate a discrete crack methodology based on the Floating Node Method to simulate matrix-crack/delamination interaction. The main novel aspects of the approach are: (1) the implementation of the floating node method via an 'extended interface element' to represent delaminations, matrix-cracks and their interaction, (2) application of directional cohesive elements to infer overall delamination direction, and (3) use of delamination direction and stress state at the delamination front to determine migration onset. Overall, good agreement was obtained between simulations and experiments. However, the validation exercise revealed the strong dependence of the simulation of matrix-crack/delamination interaction on the strength data (in this case transverse interlaminar strength, YT) used within the cohesive zone approach applied in this work. This strength value, YT, is itself dependent on the test geometry from which the strength measurement is taken. Thus, choosing an appropriate strength value becomes an ad-hoc step. As a consequence, further work is needed to adequately characterize and assess the accuracy and adequacy of cohesive zone approaches to model small crack growth and crack onset. Additionally, often when simulating damage progression with cohesive zone elements, the strength is lowered while keeping the fracture toughness constant to enable the use of coarser meshes. Results from the present study suggest that this approach is not recommended for any problem involving crack initiation, small crack growth or multiple crack interaction.
Revealing time bunching effect in single-molecule enzyme conformational dynamics.
Lu, H Peter
2011-04-21
In this perspective, we focus our discussion on how single-molecule spectroscopy and statistical analysis are able to reveal hidden enzyme properties, taking the study of T4 lysozyme as an example. Protein conformational fluctuations and dynamics play a crucial role in biomolecular functions, such as in enzymatic reactions. Single-molecule spectroscopy is a powerful approach to analyze protein conformational dynamics under physiological conditions, providing dynamic perspectives on a molecular-level understanding of protein structure-function mechanisms. Using single-molecule fluorescence spectroscopy, we have probed T4 lysozyme conformational motions under the hydrolysis reaction of a polysaccharide of E. coli B cell walls by monitoring the fluorescence resonant energy transfer (FRET) between a donor-acceptor probe pair tethered to T4 lysozyme domains involved in open-close hinge-bending motions. Based on the single-molecule spectroscopic results, molecular dynamics simulation, a random walk model analysis, and a novel 2D statistical correlation analysis, we have revealed a time bunching effect in protein conformational motion dynamics that is critical to enzymatic functions. The bunching effect implies that conformational motion times tend to bunch in a finite and narrow time window. We show that convoluted multiple Poisson rate processes give rise to the bunching effect in the enzymatic reaction dynamics. Evidently, the bunching effect is likely common in protein conformational dynamics involved in conformation-gated protein functions. In this perspective, we also discuss a new approach of 2D regional correlation analysis capable of analyzing fluctuation dynamics of complex multiple correlated and anti-correlated fluctuations under a non-correlated noise background. Using this new method, we are able to map out any defined segments along the fluctuation trajectories and determine whether they are correlated, anti-correlated, or non-correlated; a cross-correlation analysis can then be applied to each specific segment to obtain a detailed fluctuation dynamics analysis.
Multiple sclerosis lesion segmentation using dictionary learning and sparse coding.
Weiss, Nick; Rueckert, Daniel; Rao, Anil
2013-01-01
The segmentation of lesions in the brain during the development of Multiple Sclerosis is part of the diagnostic assessment for this disease and gives information on its current severity. This laborious process is still carried out in a manual or semiautomatic fashion by clinicians because published automatic approaches have not been universal enough to be widely employed in clinical practice. Thus Multiple Sclerosis lesion segmentation remains an open problem. In this paper we present a new unsupervised approach addressing this problem with dictionary learning and sparse coding methods. We show its general applicability to the problem of lesion segmentation by evaluating our approach on synthetic and clinical image data and comparing it to state-of-the-art methods. Furthermore the potential of using dictionary learning and sparse coding for such segmentation tasks is investigated and various possibilities for further experiments are discussed.
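As a rough sketch of the ingredients, the code below learns a patch dictionary with sparse coding and flags poorly reconstructed patches as lesion candidates; this residual-based rule and all parameters are illustrative assumptions, not the authors' segmentation criterion.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(3)
img = rng.normal(0.3, 0.05, (64, 64))
img[20:30, 20:30] += 0.5                     # hypothetical "lesion"

patches = extract_patches_2d(img, (5, 5)).reshape(-1, 25)
dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)  # sparse codes per patch

recon = codes @ dico.components_
residual = np.linalg.norm(patches - recon, axis=1)
candidates = residual > np.percentile(residual, 99)  # atypical patches
```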
MixGF: spectral probabilities for mixture spectra from more than one peptide.
Wang, Jian; Bourne, Philip E; Bandeira, Nuno
2014-12-01
In large-scale proteomic experiments, multiple peptide precursors are often cofragmented simultaneously in the same mixture tandem mass (MS/MS) spectrum. These spectra tend to elude current computational tools because of the ubiquitous assumption that each spectrum is generated from only one peptide. Therefore, tools that consider multiple peptide matches to each MS/MS spectrum can potentially improve the relatively low spectrum identification rate often observed in proteomics experiments. More importantly, data independent acquisition protocols promoting the cofragmentation of multiple precursors are emerging as alternative methods that can greatly improve the throughput of peptide identifications but their success also depends on the availability of algorithms to identify multiple peptides from each MS/MS spectrum. Here we address a fundamental question in the identification of mixture MS/MS spectra: determining the statistical significance of multiple peptides matched to a given MS/MS spectrum. We propose the MixGF generating function model to rigorously compute the statistical significance of peptide identifications for mixture spectra and show that this approach improves the sensitivity of current mixture spectra database search tools by a ≈30-390%. Analysis of multiple data sets with MixGF reveals that in complex biological samples the number of identified mixture spectra can be as high as 20% of all the identified spectra and the number of unique peptides identified only in mixture spectra can be up to 35.4% of those identified in single-peptide spectra. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-12-08
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper, a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.
QDMR: a quantitative method for identification of differentially methylated regions by entropy
Zhang, Yan; Liu, Hongbo; Lv, Jie; Xiao, Xue; Zhu, Jiang; Liu, Xiaojuan; Su, Jianzhong; Li, Xia; Wu, Qiong; Wang, Fang; Cui, Ying
2011-01-01
DNA methylation plays critical roles in transcriptional regulation and chromatin remodeling. Differentially methylated regions (DMRs) have important implications for development, aging and diseases. Therefore, genome-wide mapping of DMRs across various temporal and spatial methylomes is important in revealing the impact of epigenetic modifications on heritable phenotypic variation. We present a quantitative approach, quantitative differentially methylated regions (QDMR), to quantify methylation differences and identify DMRs from genome-wide methylation profiles by adapting Shannon entropy. QDMR was applied to synthetic methylation patterns and to methylation profiles detected by methylated DNA immunoprecipitation microarray (MeDIP-chip) in human tissues/cells. This approach gives a reasonable quantitative measure of methylation difference across multiple samples. A DMR threshold was then determined from a methylation probability model. Using this threshold, QDMR identified 10,651 tissue DMRs related to genes enriched for cell differentiation, including 4,740 DMRs not identified by the method developed by Rakyan et al. QDMR can also measure the sample specificity of each DMR. Finally, the application to methylation profiles detected by reduced representation bisulphite sequencing (RRBS) in mouse showed the platform-free and species-free nature of QDMR. This approach provides an effective tool for the high-throughput identification of potential functional regions involved in epigenetic regulation. PMID:21306990
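The entropy adaptation can be sketched in a few lines; normalizing a region's methylation levels across samples into a distribution is a simplified reading of QDMR, and the values are hypothetical.

```python
import numpy as np

def methylation_entropy(meth):
    """Shannon entropy of one region's methylation across samples:
    high entropy = uniform methylation, low entropy = sample-specific
    (DMR-like), following the spirit of QDMR."""
    p = np.asarray(meth, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

print(methylation_entropy([0.8, 0.8, 0.8, 0.8]))     # 2.0, uniform across tissues
print(methylation_entropy([0.9, 0.05, 0.05, 0.05]))  # ~0.8, tissue-specific
```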
A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set
Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong
2012-01-01
Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known clustering algorithm k-means, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Lindgren, Eric A.
2018-04-01
This paper expands on the objective and motivation for NDE-based characterization and includes a discussion of the current approach using model-assisted inversion being pursued within the Air Force Research Laboratory (AFRL). This includes a discussion of the multiple model-based methods that can be used, including physics-based models, deep machine learning, and heuristic approaches. The benefits and drawbacks of each method are reviewed and the potential to integrate multiple methods is discussed. Initial successes are included to highlight the ability to obtain quantitative values of damage. Additional steps remaining to realize this capability with statistical metrics of accuracy are discussed, as is how these results can be used to enable probabilistic life management. The outcome of this initiative will realize the long-term desired capability of NDE methods to provide quantitative characterization to accelerate certification of new materials and enhance life management of engineered systems.
Aguado Loi, Claudia X; Alfonso, Moya L; Chan, Isabella; Anderson, Kelsey; Tyson, Dinorah Dina Martinez; Gonzales, Junius; Corvin, Jaime
2017-08-01
The purpose of this paper is to share lessons learned from a collaborative, community-informed mixed-methods approach to adapting an evidence-based intervention to meet the needs of Latinos with chronic disease and minor depression and their family members. Mixed methods informed by community-based participatory research (CBPR) were employed to triangulate multiple stakeholders' perceptions of facilitators and barriers to implementing the adapted intervention in community settings. Community partners provided an insider perspective to overcome methodological challenges. The study's community-informed mixed-methods research approach offered advantages over a single research methodology by expanding or confirming research findings and engaging multiple stakeholders in data collection. This approach also allowed community partners to collaborate with academic partners in key research decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Boyd, Philip W; Collins, Sinead; Dupont, Sam; Fabricius, Katharina; Gattuso, Jean-Pierre; Havenhand, Jonathan; Hutchins, David A; Riebesell, Ulf; Rintoul, Max S; Vichi, Marcello; Biswas, Haimanti; Ciotti, Aurea; Gao, Kunshan; Gehlen, Marion; Hurd, Catriona L; Kurihara, Haruko; McGraw, Christina M; Navarro, Jorge M; Nilsson, Göran E; Passow, Uta; Pörtner, Hans-Otto
2018-06-01
Marine life is controlled by multiple physical and chemical drivers and by diverse ecological processes. Many of these oceanic properties are being altered by climate change and other anthropogenic pressures. Hence, identifying the influences of multifaceted ocean change, from local to global scales, is a complex task. To guide policy-making and make projections of the future of the marine biosphere, it is essential to understand biological responses at physiological, evolutionary and ecological levels. Here, we contrast and compare different approaches to multiple driver experiments that aim to elucidate biological responses to a complex matrix of ocean global change. We present the benefits and the challenges of each approach with a focus on marine research, and guidelines to navigate through these different categories to help identify strategies that might best address research questions in fundamental physiology, experimental evolutionary biology and community ecology. Our review reveals that the field of multiple driver research is being pulled in complementary directions: the need for reductionist approaches to obtain process-oriented, mechanistic understanding and a requirement to quantify responses to projected future scenarios of ocean change. We conclude the review with recommendations on how best to align different experimental approaches to contribute fundamental information needed for science-based policy formulation. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zhao, Weichen; Sun, Zhuo; Kong, Song
2016-10-01
Wireless devices can be identified by the fingerprint extracted from the signal they transmit, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and a multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. Summary statistics of each wavelet coefficient vector are used to construct the fingerprint. Besides, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
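A minimal sketch of the fingerprint construction, assuming PyWavelets and a synthetic phase sequence; the wavelet family, level and statistics are illustrative choices rather than the paper's tuned settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
phase = np.cumsum(rng.normal(0.0, 0.01, 4096))   # stand-in for extracted phase noise

def wavelet_fingerprint(sig, wavelet="db4", level=5):
    """Decompose the phase sequence and summarize each coefficient
    vector with simple statistics to form the device fingerprint."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return np.array(feats)

print(wavelet_fingerprint(phase).shape)          # (3 * (level + 1),)
```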
ERIC Educational Resources Information Center
Calfee, Robert; Sperling, Melanie
2010-01-01
This book examines the use of mixed methods for conducting language and literacy research, defining how and why this approach is successful for solving problems and clarifying issues that researchers encounter. Using research findings, the authors explore how an intermingling of multiple methods expands the possibilities of observation and…
Improving RNA-Seq expression estimation by modeling isoform- and exon-specific read sequencing rate.
Liu, Xuejun; Shi, Xinxin; Chen, Chunlin; Zhang, Li
2015-10-16
The high-throughput sequencing technology RNA-Seq has been widely used in recent years to quantify gene and isoform expression in transcriptome studies. Accurate expression measurement from the millions or billions of short reads generated is hindered by two difficulties. One is the ambiguous mapping of reads to the reference transcriptome caused by alternative splicing, which increases the uncertainty in estimating isoform expression. The other is the non-uniformity of read distribution along the reference transcriptome due to positional, sequencing, mappability and other undiscovered sources of bias. This violates the uniform assumption of read distribution made by many expression calculation approaches, such as direct RPKM calculation and Poisson-based models. Many methods have been proposed to address these difficulties. Some approaches employ latent variable models to discover the underlying pattern of read sequencing. However, most of these methods perform bias correction based on surrounding sequence content and share the bias models across all genes. They therefore cannot estimate gene- and isoform-specific biases, as revealed by recent studies. We propose a latent variable model, NLDMseq, to estimate gene and isoform expression. Our method adopts latent variables to model the unknown isoforms, from which reads originate, and the underlying percentage of multiple spliced variants. The isoform- and exon-specific read sequencing biases are modeled to account for the non-uniformity of read distribution, and are identified by utilizing the replicate information of multiple lanes of a single library run. We employ simulated and real data to verify the performance of our method in terms of accuracy in the calculation of gene and isoform expression. Results show that NLDMseq obtains competitive gene and isoform expression estimates compared to popular alternatives. Finally, the proposed method is applied to the detection of differential expression (DE) to show its usefulness in downstream analysis. The proposed NLDMseq method provides an approach to accurately estimate gene and isoform expression from RNA-Seq data by modeling the isoform- and exon-specific read sequencing biases. It makes use of a latent variable model to discover the hidden pattern of read sequencing. We have shown that it works well in both simulations and real datasets, and has competitive performance compared to popular methods. The method has been implemented as freely available software which can be found at https://github.com/PUGEA/NLDMseq.
A full-potential approach to the relativistic single-site Green's function
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...
2016-07-07
One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested and, with the help of Krein's theorem, the relativistic effects and full-potential effects in group V elements and noble metals are thoroughly investigated.
Remote sensing of tropospheric turbulence using GPS radio occultation
NASA Astrophysics Data System (ADS)
Shume, Esayas; Ao, Chi
2016-07-01
Radio occultation (RO) measurements are sensitive to the small-scale irregularities in the atmosphere. In this study, we present a new technique to estimate tropospheric turbulence strength (namely, a scintillation index) by analyzing RO amplitude fluctuations in the impact parameter domain. GPS RO observations from the COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) satellites enabled us to calculate global maps of scintillation measures, revealing the seasonal, latitudinal, and longitudinal characteristics of the turbulent troposphere. Such information is both difficult and expensive to obtain, especially over the oceans. To verify our approach, simulation experiments using the multiple phase screen (MPS) method were conducted. The results show that scintillation indices inferred from the MPS simulations are in good agreement with scintillation measures estimated from COSMIC observations.
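The scintillation strength referred to here is commonly summarized by the S4 index, the normalized standard deviation of intensity. A minimal sketch on a plain amplitude series (the paper computes its measure from RO amplitude fluctuations in the impact parameter domain):

```python
import numpy as np

def s4_index(amplitude):
    """S4 scintillation index: S4^2 = (<I^2> - <I>^2) / <I>^2,
    with intensity I = amplitude^2."""
    I = np.asarray(amplitude, dtype=float) ** 2
    return np.sqrt((np.mean(I**2) - np.mean(I) ** 2) / np.mean(I) ** 2)

rng = np.random.default_rng(5)
amp = 1.0 + 0.1 * rng.normal(size=2000)   # weakly fluctuating amplitude
print(s4_index(amp))                      # small S4 for weak scintillation
```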
Integrated Metrics for Improving the Life Cycle Approach to Assessing Product System Sustainability
Life cycle approaches are critical for identifying and managing to reduce burdens in the sustainability of product systems. While these methods can indicate potential environmental impacts of a product, current Life Cycle Assessment (LCA) methods fail to integrate the multiple im...
Simultaneous Two-Way Clustering of Multiple Correspondence Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Dillon, William R.
2010-01-01
A 2-way clustering approach to multiple correspondence analysis is proposed to account for cluster-level heterogeneity of both respondents and variable categories in multivariate categorical data. Specifically, in the proposed method, multiple correspondence analysis is combined with k-means in a unified framework in which "k"-means is…
NASA Astrophysics Data System (ADS)
Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc
2011-06-01
This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth cost functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimizer that performs well on ED problems. It handles premature convergence by re-initializing the velocity whenever particles stagnate in the search space. TVAC is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions, and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC finds higher-quality solutions for non-smooth ED problems than many other methods.
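A compact sketch of SPSO with TVAC on a single-unit valve-point cost is shown below; the cost curve, bounds and stagnation rule are illustrative stand-ins for the full multi-unit ED formulation with demand and fuel-option constraints.

```python
import numpy as np

rng = np.random.default_rng(6)

def cost(p):
    # hypothetical one-unit dispatch cost with a non-smooth valve-point term
    return 0.01 * p**2 + 2.0 * p + 50 * np.abs(np.sin(0.08 * p))

def spso_tvac(n=30, iters=200, lo=10.0, hi=100.0):
    x = rng.uniform(lo, hi, n)
    pbest, pval = x.copy(), cost(x)
    gbest = x[np.argmin(pval)]
    for t in range(iters):
        c1 = 2.5 - 2.0 * t / iters          # cognitive coefficient 2.5 -> 0.5
        c2 = 0.5 + 2.0 * t / iters          # social coefficient 0.5 -> 2.5
        v = c1 * rng.random(n) * (pbest - x) + c2 * rng.random(n) * (gbest - x)
        stalled = np.abs(v) < 1e-8          # re-initialize stagnated velocities
        v[stalled] = rng.uniform(-0.1, 0.1, stalled.sum()) * (hi - lo)
        x = np.clip(x + v, lo, hi)
        f = cost(x)
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        gbest = pbest[np.argmin(pval)]
    return gbest, cost(gbest)

print(spso_tvac())
```

Dropping the inertia term and re-initializing stagnated velocities is the "self-organizing hierarchical" ingredient; TVAC shifts the swarm from exploration toward exploitation over the run.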
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
A Methodological Self-Study of Quantitizing: Negotiating Meaning and Revealing Multiplicity
ERIC Educational Resources Information Center
Seltzer-Kelly, Deborah; Westwood, Sean J.; Pena-Guzman, David M.
2012-01-01
This inquiry developed during the process of "quantitizing" qualitative data the authors had gathered for a mixed methods curriculum efficacy study. Rather than providing the intended rigor to their data coding process, their use of an intercoder reliability metric prompted their investigation of the multiplicity and messiness that, as they…
2009-01-01
Many studies of RNA folding and catalysis have revealed conformational heterogeneity, metastable folding intermediates, and long-lived states with distinct catalytic activities. We have developed a single-molecule imaging approach for investigating the functional heterogeneity of in vitro-evolved RNA aptamers. Monitoring the association of fluorescently labeled ligands with individual RNA aptamer molecules has allowed us to record binding events over the course of multiple days, thus providing sufficient statistics to quantitatively define the kinetic properties at the single-molecule level. The ligand binding kinetics of the highly optimized RNA aptamer studied here displays a remarkable degree of uniformity and lack of memory. Such homogeneous behavior is quite different from the heterogeneity seen in previous single-molecule studies of naturally derived RNA and protein enzymes. The single-molecule methods we describe may be of use in analyzing the distribution of functional molecules in heterogeneous evolving populations or even in unselected samples of random sequences. PMID:19572753
Is it all in the game? Flow experience and scientific practices during an INPLACE mobile game
NASA Astrophysics Data System (ADS)
Bressler, Denise M.
Mobile science learning games show promise for promoting scientific practices and high engagement. Researchers have quantified this engagement according to flow theory. Using an embedded mixed-methods design, this study investigated whether an INPLACE mobile game promotes flow experience, scientific practices, and effective team collaboration. Students playing the game (n=59) were compared with students in a business-as-usual control activity (n=120). Using an open-ended instrument designed to measure scientific practices and a self-report flow survey, this study empirically assessed flow and learners' scientific practices. The game players had significantly higher levels of flow and scientific practices. Using a multiple case study approach, collaboration among game teams (n=3 teams) was qualitatively compared with control teams (n=3 teams). Game teams revealed not only higher levels of scientific practices but also higher levels of engaged responses and communal language. Control teams revealed lower levels of scientific practice along with higher levels of rejecting responses and command language. Implications of these findings are discussed.
Thoury, M.; Mille, B.; Séverin-Fabiani, T.; Robbiola, L.; Réfrégiers, M.; Jarrige, J-F; Bertrand, L.
2016-01-01
Photoluminescence spectroscopy is a key method to monitor defects in semiconductors from nanophotonics to solar cell systems. Paradoxically, its great sensitivity to small variations of local environment becomes a handicap for heterogeneous systems, such as are encountered in environmental, medical, ancient materials sciences and engineering. Here we demonstrate that a novel full-field photoluminescence imaging approach allows accessing the spatial distribution of crystal defect fluctuations at the crystallite level across centimetre-wide fields of view. This capacity is illustrated in archaeology and material sciences. The coexistence of two hitherto indistinguishable non-stoichiometric cuprous oxide phases is revealed in a 6,000-year-old amulet from Mehrgarh (Baluchistan, Pakistan), identified as the oldest known artefact made by lost-wax casting and providing a better understanding of this fundamental invention. Low-concentration crystal defect fluctuations are readily mapped within ZnO nanowires. High spatial dynamics-photoluminescence imaging holds great promise for the characterization of bulk heterogeneous systems across multiple disciplines. PMID:27843139
Alcohol-abuse drug disulfiram targets cancer via p97 segregase adapter NPL4
Skrott, Zdenek; Mistrik, Martin; Andersen, Klaus Kaae; Friis, Søren; Majera, Dusana; Gursky, Jan; Ozdian, Tomas; Bartkova, Jirina; Turi, Zsofia; Moudry, Pavel; Kraus, Marianne; Michalova, Martina; Vaclavkova, Jana; Dzubak, Petr; Vrobel, Ivo; Pouckova, Pavla; Sedlacek, Jindrich; Miklovicova, Andrea; Kutt, Anne; Li, Jing; Mattova, Jana; Driessen, Christoph; Dou, Q. Ping; Olsen, Jørgen; Hajduch, Marian; Cvek, Boris; Deshaies, Raymond J.; Bartek, Jiri
2017-01-01
Cancer incidence is rising and this global challenge is further exacerbated by tumour resistance to available medicines. A promising approach to such unmet need for improved cancer treatment is drug repurposing. Here we highlight the potential for repurposing disulfiram (Antabuse), an old alcohol-aversion drug effective against diverse cancer types in preclinical studies. Our nationwide epidemiological study reveals that patients who continuously used disulfiram have a lower risk of death from cancer compared to those who stopped using the drug at their diagnosis. Moreover, we identify ditiocarb-copper complex as the metabolite of disulfiram responsible for anticancer effects, and provide methods to detect its preferential accumulation in tumours and candidate biomarkers for impact in cells and tissues. Finally, our functional and biophysical analyses reveal the long-sought molecular target of disulfiram’s tumour suppressing effects as NPL4, an adapter of p97/VCP segregase essential for protein turnover involved in multiple regulatory and stress-response cellular pathways. PMID:29211715
Grains of connectivity: analysis at multiple spatial scales in landscape genetics.
Galpern, Paul; Manseau, Micheline; Wilson, Paul
2012-08-01
Landscape genetic analyses are typically conducted at one spatial scale. Considering multiple scales may be essential for identifying landscape features influencing gene flow. We examined landscape connectivity for woodland caribou (Rangifer tarandus caribou) at multiple spatial scales using a new approach based on landscape graphs that creates a Voronoi tessellation of the landscape. To illustrate the potential of the method, we generated five resistance surfaces to explain how landscape pattern may influence gene flow across the range of this population. We tested each resistance surface using a raster at the spatial grain of available landscape data (200 m grid squares). We then used our method to produce up to 127 additional grains for each resistance surface. We applied a causal modelling framework with partial Mantel tests, where evidence of landscape resistance is tested against an alternative hypothesis of isolation-by-distance, and found statistically significant support for landscape resistance to gene flow in 89 of the 507 spatial grains examined. We found evidence that major roads as well as the cumulative effects of natural and anthropogenic disturbance may be contributing to the genetic structure. Using only the original grid surface yielded no evidence for landscape resistance to gene flow. Our results show that using multiple spatial grains can reveal landscape influences on genetic structure that may be overlooked with a single grain, and suggest that coarsening the grain of landcover data may be appropriate for highly mobile species. We discuss how grains of connectivity and related analyses have potential landscape genetic applications in a broad range of systems. © 2012 Blackwell Publishing Ltd.
Yu, Hwa-Lung; Lin, Yuan-Chien; Kuo, Yi-Ming
2015-09-01
Understanding the temporal dynamics and interactions of particulate matter (PM) concentration and composition is important for air quality control. This paper applied a dynamic factor analysis (DFA) method to reveal the underlying mechanisms of nonstationary variations in twelve ambient concentrations of aerosols and gaseous pollutants, and their associations with meteorological factors. This approach can account for the uncertainties and temporal dependences of time series data. The common trends of the yearlong and three selected diurnal variations were obtained to characterize the dominant processes occurring in general and in specific scenarios in Taipei during 2009 (i.e., during Asian dust storm (ADS) events, rainfall, and under normal conditions). The results revealed two distinct yearlong NOx transformation processes, and demonstrated that traffic emissions and photochemical reactions both critically influence diurnal variation, depending upon meteorological conditions. During an ADS event, transboundary transport and distinct weather conditions both influenced the temporal pattern of the identified common trends. This study shows that the DFA method can effectively extract meaningful latent processes from time series data and provide insight into the dominant associations and interactions in complex air pollution processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Paparini, Andrea; Yang, Rongchang; Chen, Linda; Tong, Kaising; Gibson-Kueh, Susan; Lymbery, Alan; Ryan, Una M
2017-11-01
Currently, the systematics, biology and epidemiology of piscine Cryptosporidium species are poorly understood. Here, we compared Sanger and next-generation sequencing (NGS) of piscine Cryptosporidium at the 18S rRNA and actin genes. The hosts comprised 11 ornamental fish species, spanning four orders and eight families. The objectives were to (i) confirm the rich genetic diversity of the parasite and the high frequency of mixed infections; and (ii) explore the potential of NGS in the presence of complex genetic mixtures. By Sanger sequencing, four main genotypes were obtained at the actin locus, while at the 18S locus, seven genotypes were identified. At both loci, NGS revealed frequent mixed infections, consisting of one highly dominant variant plus substantially rarer genotypes. Both sequencing methods detected novel Cryptosporidium genotypes at both loci, including a novel and highly abundant actin genotype identified by both Sanger sequencing and NGS. Importantly, this genotype accounted for 68.9% of all NGS reads from all samples (249,585/362,372). The present study confirms that aquarium fish can harbour a large and unexplored Cryptosporidium genetic diversity. Although commonly used in molecular parasitology studies, nested PCR prevents quantitative comparisons and thwarts the advantages of NGS when this approach is used to investigate multiple infections.
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
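The MIte workflow (impute with the outcome in the imputation model, estimate the IPTW effect in each completed dataset, pool) can be sketched as below; the imputation draw and the gradient-step logistic fit are naive stand-ins for proper implementations, and only the point estimates are pooled here (Rubin's rules would also combine within- and between-imputation variances).

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_propensity(z, x):
    """Crude gradient-ascent logistic regression of treatment on covariate."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(2000):
        p = 1 / (1 + np.exp(-X @ b))
        b += 0.5 * X.T @ (z - p) / len(z)
    return 1 / (1 + np.exp(-X @ b))

def iptw_effect(y, z, p):
    w = z / p + (1 - z) / (1 - p)           # inverse probability weights
    return (np.average(y[z == 1], weights=w[z == 1])
            - np.average(y[z == 0], weights=w[z == 0]))

def mite(y, z, x, m=10):
    miss = np.isnan(x)
    ests = []
    for _ in range(m):
        xi = x.copy()
        # naive imputation draw conditioning on outcome and treatment
        xi[miss] = 0.3 * y[miss] + 0.5 * z[miss] + rng.normal(0, 1, miss.sum())
        ests.append(iptw_effect(y, z, fit_propensity(z, xi)))
    return np.mean(ests)

n = 500
x = rng.normal(size=n)                       # confounder
z = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
y = 1.0 * z + x + rng.normal(size=n)         # true marginal effect = 1.0
x[rng.random(n) < 0.3] = np.nan              # 30% of the covariate missing
print(mite(y, z, x))
```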
Zamarreno-Ramos, C; Linares-Barranco, A; Serrano-Gotarredona, T; Linares-Barranco, B
2013-02-01
This paper presents a modular, scalable approach to assembling hierarchically structured neuromorphic Address Event Representation (AER) systems. The method consists of arranging modules in a 2D mesh, each communicating bidirectionally with all four neighbors. Address events include a module label. Each module includes an AER router which decides how to route address events. Two routing approaches have been proposed, analyzed and tested, using either destination or source module labels. Our analyses reveal that depending on traffic conditions and network topologies either one or the other approach may result in better performance. Experimental results are given after testing the approach using high-end Virtex-6 FPGAs. The approach is proposed for both single and multiple FPGAs, in which case a special bidirectional parallel-serial AER link with flow control is exploited, using the FPGA Rocket-I/O interfaces. Extensive test results are provided exploiting convolution modules of 64 × 64 pixels with kernels with sizes up to 11 × 11, which process real sensory data from a Dynamic Vision Sensor (DVS) retina. One single Virtex-6 FPGA can hold up to 64 of these convolution modules, which is equivalent to a neural network with 262 × 10³ neurons and almost 32 million synapses.
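As a toy software illustration of destination-label routing in such a mesh (a sketch for intuition, not the authors' FPGA router), dimension-order (X-then-Y) routing picks the output port from the current and destination module coordinates:

    def route_event(current, destination):
        """Return the output port for an address event at module `current`
        headed to module `destination`; both are (x, y) grid coordinates."""
        cx, cy = current
        dx, dy = destination
        if dx > cx:
            return "EAST"
        if dx < cx:
            return "WEST"
        if dy > cy:
            return "NORTH"
        if dy < cy:
            return "SOUTH"
        return "LOCAL"  # event has arrived; deliver to this module's core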
Speicher, Nora K; Pfeifer, Nico
2015-06-15
Despite ongoing cancer research, available therapies are still limited in quantity and effectiveness, and making treatment decisions for individual patients remains a hard problem. Established subtypes, which help guide these decisions, are mainly based on individual data types. However, the analysis of multidimensional patient data involving the measurements of various molecular features could reveal intrinsic characteristics of the tumor. Large-scale projects accumulate this kind of data for various cancer types, but we still lack the computational methods to reliably integrate this information in a meaningful manner. Therefore, we apply and extend current multiple kernel learning for dimensionality reduction approaches. On the one hand, we add a regularization term to avoid overfitting during the optimization procedure, and on the other hand, we show that one can even use several kernels per data type and thereby alleviate the user from having to choose the best kernel functions and kernel parameters for each data type beforehand. We have identified biologically meaningful subgroups for five different cancer types. Survival analysis has revealed significant differences between the survival times of the identified subtypes, with P values comparable or even better than state-of-the-art methods. Moreover, our resulting subtypes reflect combined patterns from the different data sources, and we demonstrate that input kernel matrices with only little information have less impact on the integrated kernel matrix. Our subtypes show different responses to specific therapies, which could eventually assist in treatment decision making. An executable is available upon request. © The Author 2015. Published by Oxford University Press.
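A minimal sketch of the central ingredient, assuming precomputed kernel matrices (one or more per data type) and given convex weights; the paper learns the weights by optimization with an added regularization term, which the small ridge term here only gestures at:

    import numpy as np

    def combine_kernels(kernels, weights, ridge=1e-3):
        """Convex combination of precomputed kernel matrices; the ridge
        term guards against overfitting in the spirit of the paper's
        added regularizer."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                          # project onto the simplex
        K = sum(wq * Kq for wq, Kq in zip(w, kernels))
        return K + ridge * np.eye(K.shape[0])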
NASA Astrophysics Data System (ADS)
Greenberg, Ariela Caren
Differential item functioning (DIF) and differential distractor functioning (DDF) are methods used to screen for item bias (Camilli & Shepard, 1994; Penfield, 2008). Using an applied empirical example, this mixed-methods study examined the congruency and relationship of DIF and DDF methods in screening multiple-choice items. Data for Study I were drawn from item responses of 271 female and 236 male low-income children on a preschool science assessment. Item analyses employed a common statistical approach of the Mantel-Haenszel log-odds ratio (MH-LOR) to detect DIF in dichotomously scored items (Holland & Thayer, 1988), and extended the approach to identify DDF (Penfield, 2008). Findings demonstrated that using MH-LOR to detect DIF and DDF supported the theoretical relationship that the magnitude and form of DIF are dependent on the DDF effects, and demonstrated the advantages of studying DIF and DDF together in multiple-choice items. A total of 4 items with DIF and DDF and 5 items with only DDF were detected. Study II incorporated an item content review, an important but often overlooked and under-published step of DIF and DDF studies (Camilli & Shepard, 1994). Interviews with 25 female and 22 male low-income preschool children and an expert review helped to interpret the DIF and DDF results and their comparison, and determined that a content review process of studied items can reveal reasons for potential item bias that are often congruent with the statistical results. Patterns emerged and are discussed in detail. The quantitative and qualitative analyses were conducted in an applied framework of examining the validity of the preschool science assessment scores for evaluating science programs serving low-income children; however, the techniques can be generalized for use with measures across various disciplines of research.
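For reference, the Mantel-Haenszel common log-odds ratio underlying this DIF screen pools 2x2 tables across ability strata; the sketch below assumes each stratum is given as a tuple of counts (reference correct, reference incorrect, focal correct, focal incorrect):

    import numpy as np

    def mh_log_odds_ratio(strata):
        """Mantel-Haenszel common log-odds ratio across matching strata."""
        num = den = 0.0
        for a, b, c, d in strata:   # a,b: reference group; c,d: focal group
            n = a + b + c + d
            num += a * d / n
            den += b * c / n
        return np.log(num / den)    # positive values favor the reference group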
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction: For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods: The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results: Both imputation methods outperform the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions: The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
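A minimal sketch of the imputation step, under the simplifying assumption that a missing threshold's test-positive count is drawn uniformly from the discrete values bracketed by the two nearest reported thresholds (the full MIDC procedure samples whole discrete combinations):

    import numpy as np

    rng = np.random.default_rng(1)

    def impute_threshold_count(lower_count, upper_count):
        """Draw a plausible count for an unreported threshold from the
        discrete range bracketed by the two nearest reported thresholds
        (counts are non-increasing as the threshold rises)."""
        lo, hi = sorted((lower_count, upper_count))
        return int(rng.integers(lo, hi + 1))

    # Repeating the draw M times yields M completed datasets; each is
    # meta-analysed at every threshold and the M pooled results are then
    # combined with Rubin's rules, as in the MIDC procedure.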
Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images
NASA Astrophysics Data System (ADS)
Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian
2017-01-01
There are many image classification methods, but it remains unclear which methods are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images which show the complexity of ocular images as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into the strengths and shortcomings of these methods. The best-performing methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, some methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding their condition. In addition, this work should help to accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.
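One of the compared pipelines (local binary pattern features feeding an SVM) can be sketched as follows, using scikit-image and scikit-learn with illustrative parameters; the study's actual preprocessing and tuning are not reproduced here:

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def lbp_histogram(image, points=8, radius=1):
        """Normalized histogram of uniform LBP codes for a grayscale image."""
        codes = local_binary_pattern(image, points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2,
                               range=(0, points + 2), density=True)
        return hist

    # X_train: list of grayscale slit-lamp images; y_train: cataract labels.
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    # clf.fit([lbp_histogram(im) for im in X_train], y_train)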
FISHtrees 3.0: Tumor Phylogenetics Using a Ploidy Probe.
Gertz, E Michael; Chowdhury, Salim Akhter; Lee, Woei-Jyh; Wangsa, Darawalee; Heselmeyer-Haddad, Kerstin; Ried, Thomas; Schwartz, Russell; Schäffer, Alejandro A
2016-01-01
Advances in fluorescence in situ hybridization (FISH) make it feasible to detect multiple copy-number changes in hundreds of cells of solid tumors. Studies using FISH, sequencing, and other technologies have revealed substantial intra-tumor heterogeneity. The evolution of subclones in tumors may be modeled by phylogenies. Tumors often harbor aneuploid or polyploid cell populations. Using a FISH probe to estimate changes in ploidy can guide the creation of trees that model changes in ploidy and individual gene copy-number variations. We present FISHtrees 3.0, which implements a ploidy-based tree building method based on mixed integer linear programming (MILP). The ploidy-based modeling in FISHtrees includes a new formulation of the problem of merging trees for changes of a single gene into trees modeling changes in multiple genes and the ploidy. When multiple samples are collected from each patient, varying over time or tumor regions, it is useful to evaluate similarities in tumor progression among the samples. Therefore, we further implemented in FISHtrees 3.0 a new method to build consensus graphs for multiple samples. We validate FISHtrees 3.0 on a simulated data and on FISH data from paired cases of cervical primary and metastatic tumors and on paired breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC). Tests on simulated data show improved accuracy of the ploidy-based approach relative to prior ploidyless methods. Tests on real data further demonstrate novel insights these methods offer into tumor progression processes. Trees for DCIS samples are significantly less complex than trees for paired IDC samples. Consensus graphs show substantial divergence among most paired samples from both sets. Low consensus between DCIS and IDC trees may help explain the difficulty in finding biomarkers that predict which DCIS cases are at most risk to progress to IDC. The FISHtrees software is available at ftp://ftp.ncbi.nih.gov/pub/FISHtrees.
Interactions dominate the dynamics of visual cognition.
Stephen, Damian G; Mirman, Daniel
2010-04-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. Copyright 2009 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Credille, Jennifer; Owens, Elizabeth
This capstone offers the introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally Lean has been associated with process improvements as applied to an industrial atmosphere. However, this paper will demonstrate that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments; however, the limited literature reveals most Lean techniques within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates: system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, which ensures the process is thoroughly dissected and can be achieved for any process in any work environment.
Defining Success in Adult Basic Education Settings: Multiple Stakeholders, Multiple Perspectives
Tighe, Elizabeth L.; Barnes, Adrienne E.; Connor, Carol M.; Steadman, Sharilyn C.
2015-01-01
This study employed quantitative and qualitative research approaches to investigate what constitutes “success” in Adult Basic Education (ABE) programs from the perspectives of multiple educational stakeholders: the state funding agency, the teachers, and the students. Success was defined in multiple ways. In the quantitative section of the study, we computed classroom value-added scores (used as a metric of the state’s definition of success) to identify more and less effective ABE classrooms in two Florida counties. In the qualitative section of the study, we observed and conducted interviews with teachers and students in the selected classrooms to investigate how these stakeholders defined success in ABE. Iterative consideration of the qualitative data revealed three principal markers of success: (a) instructional strategies and teacher-student interactions; (b) views on standardized testing; and (c) student motivational factors. In general, classrooms with higher value-added scores were characterized by multiple instructional approaches, positive and collaborative teacher-student interactions, and students engaging in goal setting and citing motivational factors such as family and personal fulfillment. The implications for ABE programs are discussed. PMID:26279590
A method for multitask fMRI data fusion applied to schizophrenia.
Calhoun, Vince D; Adali, Tulay; Kiehl, Kent A; Astur, Robert; Pekar, James J; Pearlson, Godfrey D
2006-07-01
It is becoming common to collect data from multiple functional magnetic resonance imaging (fMRI) paradigms on a single individual. The data from these experiments are typically analyzed separately and sometimes directly subtracted from one another on a voxel-by-voxel basis. These comparative approaches, although useful, do not directly attempt to examine potential commonalities between tasks and between voxels. To remedy this we propose a method to extract maximally spatially independent maps for each task that are "coupled" together by a shared loading parameter. We first compute an activation map for each task and each individual as "features," which are then used to perform joint independent component analysis (jICA) on the group data. We demonstrate our approach on a data set derived from healthy controls and schizophrenia patients, each of which carried out an auditory oddball task and a Sternberg working memory task. Our analysis approach revealed two interesting findings in the data that were missed with traditional analyses. First, consistent with our hypotheses, schizophrenia patients demonstrate "decreased" connectivity in a joint network including portions of regions implicated in two prevalent models of schizophrenia. A second finding is that for the voxels identified by the jICA analysis, the correlation between the two tasks was significantly higher in patients than in controls. This finding suggests that schizophrenia patients activate "more similarly" for both tasks than do controls. A possible synthesis of both findings is that patients are activating less, but also activating with a less-unique set of regions for these very different tasks. Both of the findings described support the claim that examination of joint activation across multiple tasks can enable new questions to be posed about fMRI data. Our approach can also be applied to data using more than two tasks. It thus provides a way to integrate and probe brain networks using a variety of tasks and may increase our understanding of coordinated brain networks and the impact of pathology upon them. 2005 Wiley-Liss, Inc.
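A rough sketch of the jICA idea is given below, using scikit-learn's FastICA as a stand-in for the estimation algorithm actually used, with array shapes as stated assumptions: per-subject activation maps from the two tasks are concatenated along the voxel dimension, so each estimated component carries one spatial map per task coupled by a single shared subject loading.

    import numpy as np
    from sklearn.decomposition import FastICA

    def joint_ica(task1_maps, task2_maps, n_components=8):
        """task*_maps: (subjects, voxels) arrays of activation features."""
        X = np.hstack([task1_maps, task2_maps])   # subjects x (2 * voxels)
        ica = FastICA(n_components=n_components, random_state=0)
        maps = ica.fit_transform(X.T)             # spatially independent maps
        loadings = ica.mixing_                    # one shared loading per subject
        v = task1_maps.shape[1]
        return loadings, maps[:v].T, maps[v:].T   # per-task joint maps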
NASA Astrophysics Data System (ADS)
Price, Stanton R.; Murray, Bryce; Hu, Lequn; Anderson, Derek T.; Havens, Timothy C.; Luke, Robert H.; Keller, James M.
2016-05-01
Buried and above-ground explosive hazards pose a serious threat to civilians and soldiers. The automatic detection of such threats is highly desired. Many methods exist for explosive hazard detection, e.g., hand-held based sensors, downward and forward looking vehicle mounted platforms, etc. In addition, multiple sensors are used to tackle this extreme problem, such as radar and infrared (IR) imagery. In this article, we explore the utility of feature and decision level fusion of learned features for forward looking explosive hazard detection in IR imagery. Specifically, we investigate different ways to fuse learned iECO features pre and post multiple kernel (MK) support vector machine (SVM) based classification. Three MK strategies are explored: fixed rule, heuristics and optimization-based. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, burial depths and times of day. The results reveal two interesting things. First, the different MK strategies appear to indicate that the different iECO individuals are all more-or-less important and there is not a dominant feature. This is reinforcing, as our hypothesis was that iECO provides different ways to approach target detection. Second, we observe that while optimization-based MK is mathematically appealing, i.e., it connects the learning of the fusion to the underlying classification problem we are trying to solve, it appears to be highly susceptible to overfitting, and simpler fixed-rule and heuristic approaches help us realize more generalizable iECO solutions.
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
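The bootstrap variant can be sketched compactly; here `mass_from_moments` is an assumed, instrument-specific calibration function mapping the three factorial moments to an effective mass (not shown), and the raw data are per-cycle neutron counts:

    import numpy as np

    rng = np.random.default_rng(0)

    def factorial_moments(counts):
        n = np.asarray(counts, dtype=float)
        return (n.mean(),
                (n * (n - 1)).mean(),
                (n * (n - 1) * (n - 2)).mean())

    def bootstrap_mass_sd(cycle_counts, mass_from_moments, n_boot=1000):
        """Standard deviation of the mass estimate over bootstrap resamples
        of the measured cycles."""
        masses = []
        for _ in range(n_boot):
            resample = rng.choice(cycle_counts, size=len(cycle_counts),
                                  replace=True)
            masses.append(mass_from_moments(*factorial_moments(resample)))
        return np.std(masses, ddof=1)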
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
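A condensed sketch of the non-parametric variant follows, assuming numpy arrays and the lifelines library; the uniform draw from the Kaplan-Meier tail is a simplification of sampling the conditional KM jump sizes, and the paper's semiparametric Cox variant with additional covariates is not shown:

    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)

    def impute_once(x, delta):
        """x: covariate values; delta: 1 if observed, 0 if right-censored.
        Censored values are replaced by draws from the observed values
        lying beyond them."""
        km = KaplanMeierFitter().fit(x, event_observed=delta)
        observed = np.sort(x[delta == 1])
        surv_obs = km.survival_function_at_times(observed).to_numpy()
        x_imp = x.astype(float).copy()
        for i in np.where(delta == 0)[0]:
            s_c = km.survival_function_at_times([x[i]]).to_numpy()[0]
            tail = observed[surv_obs < s_c]   # observed values beyond x[i]
            x_imp[i] = rng.choice(tail) if tail.size else observed[-1]
        return x_imp

    # For m imputations: refit the logistic model of the outcome on the
    # completed covariate in each dataset, then pool with Rubin's rules.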
Multiple imputation methods for bivariate outcomes in cluster randomised trials.
DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R
2016-09-10
Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Quantitative Proteomics via High Resolution MS Quantification: Capabilities and Limitations
Higgs, Richard E.; Butler, Jon P.; Han, Bomie; Knierman, Michael D.
2013-01-01
Recent improvements in the mass accuracy and resolution of mass spectrometers have led to renewed interest in label-free quantification using data from the primary mass spectrum (MS1) acquired from data-dependent proteomics experiments. The capacity for higher specificity quantification of peptides from samples enriched for proteins of biological interest offers distinct advantages for hypothesis generating experiments relative to immunoassay detection methods or prespecified peptide ions measured by multiple reaction monitoring (MRM) approaches. Here we describe an evaluation of different methods to post-process peptide level quantification information to support protein level inference. We characterize the methods by examining their ability to recover a known dilution of a standard protein in background matrices of varying complexity. Additionally, the MS1 quantification results are compared to a standard, targeted, MRM approach on the same samples under equivalent instrument conditions. We show the existence of multiple peptides with MS1 quantification sensitivity similar to the best MRM peptides for each of the background matrices studied. Based on these results we provide recommendations on preferred approaches to leveraging quantitative measurements of multiple peptides to improve protein level inference. PMID:23710359
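As one example of the kind of peptide-to-protein post-processing rule compared in such evaluations (the top-3 choice and data layout are illustrative, not the paper's recommendation), protein abundance can be summarized from each protein's most intense peptides:

    import numpy as np

    def topn_protein_rollup(peptide_intensities, n=3):
        """peptide_intensities: dict mapping protein -> list of MS1
        peptide intensities; returns the mean of the top-n peptides."""
        return {prot: float(np.mean(sorted(vals, reverse=True)[:n]))
                for prot, vals in peptide_intensities.items()}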
A multiple scales approach to maximal superintegrability
NASA Astrophysics Data System (ADS)
Gubbiotti, G.; Latini, D.
2018-07-01
In this paper we present a simple, algorithmic test to establish whether a Hamiltonian system is maximally superintegrable or not. This test is based on a very simple corollary of a theorem due to Nekhoroshev and on a perturbative technique called the multiple scales method. If the outcome is positive, this test can be used to suggest maximal superintegrability, whereas when the outcome is negative it can be used to disprove it. This method can be regarded as a finite-dimensional analog of the use of the multiple scales method to produce soliton equations. We use this technique to show that the real counterpart of a mechanical system found by Jules Drach in 1935 is, in general, not maximally superintegrable. We give some hints on how this approach could be applied to classify maximally superintegrable systems by presenting a direct proof of the well-known Bertrand’s theorem.
Back on Track: Approaches to Managing Highly Disruptive School Classes
ERIC Educational Resources Information Center
Vaaland, Grete S.
2017-01-01
Teaching and learning are at stake when classrooms become highly disruptive and pupils ignore the teacher's instructions and leadership. Re-establishing teacher authority in a highly disruptive school class is an understudied area. This instrumental multiple case study aimed to reveal concepts and conceptual frameworks that are suitable for…
Macher, Hada C; Martinez-Broca, Maria A; Rubio-Calvo, Amalia; Leon-Garcia, Cristina; Conde-Sanchez, Manuel; Costa, Alzenira; Navarro, Elena; Guerrero, Juan M
2012-01-01
The multiple endocrine neoplasia type 2A (MEN2A) is a monogenic disorder with an autosomal dominant pattern of inheritance, characterized by a high risk of medullary thyroid carcinoma in all mutation carriers. Although this disorder is classified as a rare disease, affected patients have a low quality of life and require continuous, very expensive treatment. At present, MEN2A is diagnosed by gene sequencing after birth, enabling early treatment and a reduction of morbidity and mortality. We first evaluated the presence of the MEN2A mutation (C634Y) in serum of 25 patients, previously diagnosed by sequencing in peripheral blood leucocytes, using HRM genotyping analysis. In a second step, we used a COLD-PCR approach followed by HRM genotyping analysis for non-invasive prenatal diagnosis of a pregnant woman carrying a fetus with a C634Y mutation. HRM analysis revealed differences in melting curve shapes that correlated with patients diagnosed for MEN2A by gene sequencing analysis with 100% accuracy. Moreover, the pregnant woman carrying the fetus with the C634Y mutation revealed a melting curve shape in agreement with the positive controls in the COLD-PCR study. The mutation was confirmed by sequencing of the COLD-PCR amplification product. In conclusion, we have established HRM analysis of serum samples as a new primary diagnostic method suitable for the detection of C634Y mutations in MEN2A patients. Simultaneously, we have applied the increased sensitivity of the COLD-PCR assay combined with HRM analysis for the non-invasive prenatal diagnosis of C634Y fetal mutations using pregnant women's serum.
Du, Yiyang; He, Bosai; Li, Qing; He, Jiao; Wang, Di; Bi, Kaishun
2017-07-01
Suan-Zao-Ren granule is widely used to treat insomnia in China. However, because of the complexity and diversity of the chemical compositions in a traditional Chinese medicine formula, comprehensive analysis of constituents in vitro and in vivo is rather difficult. In our study, a method based on ultra-high-performance liquid chromatography with quadrupole time-of-flight mass spectrometry and the PeakView® software, which supports multiple data processing approaches including product ion filtering, neutral loss filtering, and mass defect filtering, was developed to characterize the ingredients and rat serum metabolites of Suan-Zao-Ren granule. A total of 101 constituents were detected in vitro. Under the same analysis conditions, 68 constituents were characterized in rat serum, including 35 prototype components and 33 metabolites. The metabolic pathways of the main components were also illustrated. Among them, the metabolic pathways of timosaponin AI were revealed for the first time. The bioactive compounds mainly underwent phase I metabolic pathways including hydroxylation, oxidation, and hydrolysis, and phase II metabolic pathways including sulfate conjugation, glucuronide conjugation, cysteine conjugation, acetylcysteine conjugation, and glutathione conjugation. In conclusion, our results showed that this analysis approach is extremely useful for in-depth pharmacological research of Suan-Zao-Ren granule and provides a chemical basis for its rational use. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
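The mass defect filter among these approaches is easy to illustrate generically (the sketch below is not the PeakView® implementation, and the m/z values are invented): candidate metabolite ions are kept only if their fractional mass lies within a window around the parent compound's mass defect.

    def mass_defect_filter(mz_values, parent_mz, window=0.05):
        """Keep peaks whose mass defect is within `window` of the parent's."""
        parent_defect = parent_mz - int(parent_mz)
        return [mz for mz in mz_values
                if abs((mz - int(mz)) - parent_defect) <= window]

    # Example with invented values: the 0.90 defect ion is rejected.
    # mass_defect_filter([741.42, 757.41, 903.47, 580.90], 741.42)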
Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.
Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu
2018-03-10
This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.
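For intuition only, here is a heavily stripped-down, single-component phase tracker: the state holds (phase, frequency) and the cosine measurement is linearized at each step. The paper's filter instead models the interference field locally as a multi-component polynomial phase signal and, after component separation, estimates several coefficient sets jointly.

    import numpy as np

    def ekf_demodulate(signal, amp=1.0, q=1e-4, r=1e-2):
        """Track the instantaneous phase of amp*cos(phi_k) with a two-state
        (phase, frequency) extended Kalman filter."""
        x = np.array([0.0, 0.1])                  # initial phase, frequency
        P = np.eye(2)
        F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-frequency model
        Q = q * np.eye(2)
        phases = []
        for z in signal:
            x = F @ x                             # predict
            P = F @ P @ F.T + Q
            H = np.array([-amp * np.sin(x[0]), 0.0])   # linearized h(x)
            S = H @ P @ H + r
            K = P @ H / S
            x = x + K * (z - amp * np.cos(x[0]))  # update with innovation
            P = P - np.outer(K, H @ P)
            phases.append(x[0])
        return np.array(phases)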
Streiner, David L
2015-10-01
Testing many null hypotheses in a single study results in an increased probability of detecting a significant finding just by chance (the problem of multiplicity). Debates have raged over many years with regard to whether to correct for multiplicity and, if so, how it should be done. This article first discusses how multiple tests lead to an inflation of the α level, then explores the following different contexts in which multiplicity arises: testing for baseline differences in various types of studies, having >1 outcome variable, conducting statistical tests that produce >1 P value, taking multiple "peeks" at the data, and unplanned, post hoc analyses (i.e., "data dredging," "fishing expeditions," or "P-hacking"). It then discusses some of the methods that have been proposed for correcting for multiplicity, including single-step procedures (e.g., Bonferroni); multistep procedures, such as those of Holm, Hochberg, and Šidák; false discovery rate control; and resampling approaches. Note that these various approaches describe different aspects and are not necessarily mutually exclusive. For example, resampling methods could be used to control the false discovery rate or the family-wise error rate (as defined later in this article). However, the use of one of these approaches presupposes that we should correct for multiplicity, which is not universally accepted, and the article presents the arguments for and against such "correction." The final section brings together these threads and presents suggestions with regard to when it makes sense to apply the corrections and how to do so. © 2015 American Society for Nutrition.
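Two of the discussed corrections are short enough to state precisely; the sketch below implements Holm's step-down control of the family-wise error rate and the Benjamini-Hochberg control of the false discovery rate:

    import numpy as np

    def holm(pvals, alpha=0.05):
        """Step-down Holm procedure (family-wise error rate control)."""
        pvals = np.asarray(pvals)
        order = np.argsort(pvals)
        m = len(pvals)
        reject = np.zeros(m, dtype=bool)
        for rank, idx in enumerate(order):
            if pvals[idx] <= alpha / (m - rank):
                reject[idx] = True
            else:
                break   # once one test fails, all larger p-values fail too
        return reject

    def benjamini_hochberg(pvals, q=0.05):
        """Benjamini-Hochberg procedure (false discovery rate control)."""
        pvals = np.asarray(pvals)
        order = np.argsort(pvals)
        m = len(pvals)
        thresh = None
        for rank, idx in enumerate(order, start=1):
            if pvals[idx] <= rank * q / m:
                thresh = pvals[idx]   # largest p-value meeting the BH bound
        if thresh is None:
            return np.zeros(m, dtype=bool)
        return pvals <= thresh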
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
Multiple imputation of missing data in nested case-control and case-cohort studies.
Keogh, Ruth H; Seaman, Shaun R; Bartlett, Jonathan W; Wood, Angela M
2018-06-05
The nested case-control and case-cohort designs are two main approaches for carrying out a substudy within a prospective cohort. This article adapts multiple imputation (MI) methods for handling missing covariates in full-cohort studies for nested case-control and case-cohort studies. We consider data missing by design and data missing by chance. MI analyses that make use of full-cohort data and MI analyses based on substudy data only are described, alongside an intermediate approach in which the imputation uses full-cohort data but the analysis uses only the substudy. We describe adaptations to two imputation methods: the approximate method (MI-approx) of White and Royston and the "substantive model compatible" (MI-SMC) method of Bartlett et al. We also apply the "MI matched set" approach of Seaman and Keogh to nested case-control studies, which does not require any full-cohort information. The methods are investigated using simulation studies and all perform well when their assumptions hold. Substantial gains in efficiency can be made by imputing data missing by design using the full-cohort approach or by imputing data missing by chance in analyses using the substudy only. The intermediate approach brings greater gains in efficiency relative to the substudy approach and is more robust to imputation model misspecification than the full-cohort approach. The methods are illustrated using the ARIC Study cohort. Supplementary Materials provide R and Stata code. © 2018, The International Biometric Society.
Wang, Yonghua; Zheng, Chunli; Huang, Chao; Li, Yan; Chen, Xuetong; Wu, Ziyin; Wang, Zhenzhong; Xiao, Wei; Zhang, Boli
2015-01-01
Holistic medicine is an interdisciplinary field of study that integrates all types of biological information (proteins, small molecules, tissues, organs, external environmental signals, etc.) to arrive at predictive and actionable models for health care and disease treatment. Despite the global and integrative character of this discipline, a comprehensive picture of holistic medicine for the treatment of complex diseases is still lacking. In this study, we develop a novel systems pharmacology approach to dissect holistic medicine in treating cardiocerebrovascular diseases (CCDs) by TCM (traditional Chinese medicine). Firstly, applying the TCM active ingredients screened out by a systems-ADME process, we explored and experimentally validated the signed drug-target interactions, revealing the pharmacological actions of the drugs at the molecular level. Then, at the tissue/organ level, the drug therapeutic mechanisms were further investigated by a target-organ location method. Finally, a translational integrating pathway approach was applied to extract the disease-therapeutic modules for understanding the complex disease and its therapy at the systems level. For the first time, the features of drug-target-pathway-organ cooperation for the treatment of multiple-organ diseases in holistic medicine were revealed, facilitating the development of a novel treatment paradigm for complex diseases in the future.
Mediation Analysis with Multiple Mediators
VanderWeele, T.J.; Vansteelandt, S.
2014-01-01
Recent advances in the causal inference literature on mediation have extended traditional approaches to direct and indirect effects to settings that allow for interactions and non-linearities. In this paper, these approaches from causal inference are further extended to settings in which multiple mediators may be of interest. Two analytic approaches, one based on regression and one based on weighting are proposed to estimate the effect mediated through multiple mediators and the effects through other pathways. The approaches proposed here accommodate exposure-mediator interactions and, to a certain extent, mediator-mediator interactions as well. The methods handle binary or continuous mediators and binary, continuous or count outcomes. When the mediators affect one another, the strategy of trying to assess direct and indirect effects one mediator at a time will in general fail; the approach given in this paper can still be used. A characterization is moreover given as to when the sum of the mediated effects for multiple mediators considered separately will be equal to the mediated effect of all of the mediators considered jointly. The approach proposed in this paper is robust to unmeasured common causes of two or more mediators. PMID:25580377
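A highly simplified sketch of the regression-based approach for two mediators follows, assuming continuous mediators and outcome, no exposure-mediator interaction, and invented column names; the paper's estimators also cover interactions and binary or count outcomes:

    import statsmodels.formula.api as smf

    def joint_mediated_effect(df):
        """df columns: exposure A, mediators M1 and M2, outcome Y,
        baseline confounder C (all illustrative names)."""
        m1 = smf.ols("M1 ~ A + C", data=df).fit()
        m2 = smf.ols("M2 ~ A + C", data=df).fit()
        out = smf.ols("Y ~ A + M1 + M2 + C", data=df).fit()
        # Joint natural indirect effect through (M1, M2) per unit of A,
        # valid under linearity and no interactions:
        nie = (m1.params["A"] * out.params["M1"] +
               m2.params["A"] * out.params["M2"])
        nde = out.params["A"]   # natural direct effect
        return nde, nie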
Jalava, Katri; Rintala, Hanna; Ollgren, Jukka; Maunula, Leena; Gomez-Alvarez, Vicente; Revez, Joana; Palander, Marja; Antikainen, Jenni; Kauppinen, Ari; Räsänen, Pia; Siponen, Sallamaari; Nyholm, Outi; Kyyhkynen, Aino; Hakkarainen, Sirpa; Merentie, Juhani; Pärnänen, Martti; Loginov, Raisa; Ryu, Hodon; Kuusi, Markku; Siitonen, Anja; Miettinen, Ilkka; Santo Domingo, Jorge W; Hänninen, Marja-Liisa; Pitkänen, Tarja
2014-01-01
Failures in the drinking water distribution system cause gastrointestinal outbreaks with multiple pathogens. A water distribution pipe breakage caused a community-wide waterborne outbreak in Vuorela, Finland, in July 2012. We investigated this outbreak with advanced epidemiological and microbiological methods. A total of 473/2931 inhabitants (16%) responded to a web-based questionnaire. Water and patient samples were subjected to analysis of multiple microbial targets, molecular typing and microbial community analysis. Spatial analysis of the water distribution network was performed, and we applied a spatial logistic regression model. The course of the illness was mild. Drinking untreated tap water from the defined outbreak area was significantly associated with illness (RR 5.6, 95% CI 1.9-16.4), with risk increasing in a dose-response manner. The closer a person lived to the water distribution breakage point, the higher the risk of becoming ill. Sapovirus, enterovirus, single Campylobacter jejuni and EHEC O157:H7 findings as well as virulence genes for the EPEC, EAEC and EHEC pathogroups were detected by molecular or culture methods in the faecal samples of the patients. EPEC, EAEC and EHEC virulence genes and faecal indicator bacteria were also detected in water samples. Microbial community sequencing of contaminated tap water revealed an abundance of Arcobacter species. The polyphasic approach improved the understanding of the source of the infections, and helped to define the extent and magnitude of this outbreak.
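A bare-bones, entirely synthetic version of the kind of distance-based logistic model described (their spatial model also accounted for the distribution network itself) shows the expected dose-response pattern:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    distance = rng.uniform(0, 5, 400)               # km from the breakage point
    p_ill = 1 / (1 + np.exp(-(1.0 - 1.2 * distance)))
    ill = rng.binomial(1, p_ill)                    # simulated illness status
    fit = sm.Logit(ill, sm.add_constant(distance)).fit(disp=0)
    print(fit.params)   # negative slope: risk falls with distance to the break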
Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A
2014-03-18
We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
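The counting logic is simple enough to sketch in a few lines (a toy illustration with invented reads, not the assay's actual pipeline): the number of distinct barcodes per gene, not the number of reads, estimates the number of starting molecules, which is what removes PCR amplification bias.

    from collections import defaultdict

    def count_molecules(reads):
        """reads: iterable of (gene, barcode) pairs from sequencing."""
        barcodes = defaultdict(set)
        for gene, barcode in reads:
            barcodes[gene].add(barcode)   # amplification duplicates collapse
        return {gene: len(bcs) for gene, bcs in barcodes.items()}

    # count_molecules([("ACTB", "AAGT"), ("ACTB", "AAGT"), ("ACTB", "CCGA")])
    # -> {"ACTB": 2}: three reads, but only two original molecules.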
A method for integrating multiple components in a decision support system
Donald Nute; Walter D. Potter; Zhiyuan Cheng; Mayukh Dass; Astrid Glende; Frederick Maierv; Cy Routh; Hajime Uchiyama; Jin Wang; Sarah Witzig; Mark Twery; Peter Knopp; Scott Thomasma; H. Michael Rauscher
2005-01-01
We present a flexible, extensible method for integrating multiple tools into a single large decision support system (DSS) using a forest ecosystem management DSS (NED-2) as an example. In our approach, a rich ontology for the target domain is developed and implemented in the internal data model for the DSS. Semi-autonomous agents control external components and...
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, knowledge of a regularization parameter that can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
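A schematic sketch of the multiplicative idea (following the paper only loosely in its choice of regularization functional and stopping rule): minimizing the product ||Ax - b||² · ||x||² yields stationarity conditions identical to Tikhonov's, except that the effective parameter is re-derived from the current iterate rather than fixed in advance.

    import numpy as np

    def multiplicative_regularization(A, b, n_iter=50, eps=1e-12):
        """Fixed-point iteration for min ||A x - b||^2 * ||x||^2."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]     # unregularized start
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(n_iter):
            residual = A @ x - b
            lam = (residual @ residual) / (x @ x + eps)  # self-adjusted weight
            x = np.linalg.solve(AtA + lam * np.eye(len(x)), Atb)
        return x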
Stack, Edward C; Wang, Chichung; Roman, Kristin A; Hoyt, Clifford C
2014-11-01
Tissue sections offer the opportunity to understand a patient's condition, to make better prognostic evaluations and to select optimum treatments, as evidenced by the place pathology holds today in clinical practice. Yet there is a wealth of information locked up in a tissue section that is only partially accessed, due mainly to the limitations of tools and methods. Often tissues are assessed primarily based on visual analysis of one or two proteins, or 2-3 DNA or RNA molecules. Even while analysis is still based on visual perception, image analysis is starting to address the variability of human perception. This is in contrast to measuring characteristics that are substantially out of reach of human perception, such as parameters revealed through co-expression, spatial relationships, heterogeneity, and low-abundance molecules. What is not routinely accessed is the information revealed through simultaneous detection of multiple markers, the spatial relationships among cells and tissue in disease, and the heterogeneity now understood to be critical to developing effective therapeutic strategies. Our purpose here is to review and assess multiplexed, quantitative, image-analysis-based approaches that use new multicolor immunohistochemistry methods, automated multispectral slide imaging, and advanced trainable pattern recognition software. A key aspect of our approach is presenting imagery in a workflow that engages the pathologist, utilizing the strengths of human perception and judgment while significantly expanding the range of metrics collectable from tissue sections and providing the level of consistency and precision needed to support the complexities of personalized medicine. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NHEERL is conducting a demonstration project to develop tools and approaches for assessing the risks of multiple stressors to populations of piscivorous wildlife, leading to the development of risk-based criteria. Specifically, we are developing methods and approaches to assess...
Information retrieval pathways for health information exchange in multiple care settings.
Kierkegaard, Patrick; Kaushal, Rainu; Vest, Joshua R
2014-11-01
To determine which health information exchange (HIE) technologies and information retrieval pathways healthcare professionals relied on to meet their information needs in the context of laboratory test results, radiological images and reports, and medication histories. Primary data was collected over a 2-month period across 3 emergency departments, 7 primary care practices, and 2 public health clinics in New York state. Qualitative research methods were used to collect and analyze data from semi-structured interviews and participant observation. The study reveals that healthcare professionals used a complex combination of information retrieval pathways for HIE to obtain clinical information from external organizations. The choice for each approach was setting- and information-specific, but was also highly dynamic across users and their information needs. Our findings about the complex nature of information sharing in healthcare provide insights for informatics professionals about the usage of information; indicate the need for managerial support within each organization; and suggest approaches to improve systems for organizations and agencies working to expand HIE adoption.
NASA Astrophysics Data System (ADS)
Song, Yang; Laskay, Ünige A.; Vilcins, Inger-Marie E.; Barbour, Alan G.; Wysocki, Vicki H.
2015-11-01
Ticks are vectors for disease transmission because they are indiscriminant in their feeding on multiple vertebrate hosts, transmitting pathogens between their hosts. Identifying the hosts on which ticks have fed is important for disease prevention and intervention. We have previously shown that hemoglobin (Hb) remnants from a host on which a tick fed can be used to reveal the host's identity. For the present research, blood was collected from 33 bird species that are common in the U.S. as hosts for ticks but that have unknown Hb sequences. A top-down-assisted bottom-up mass spectrometry approach with a customized searching database, based on variability in known bird hemoglobin sequences, has been devised to facilitate fast and complete sequencing of hemoglobin from birds with unknown sequences. These hemoglobin sequences will be added to a hemoglobin database and used for tick host identification. The general approach has the potential to sequence any set of homologous proteins completely in a rapid manner.
NASA Astrophysics Data System (ADS)
Lisimenka, Aliaksandr; Kubicki, Adam
2017-02-01
A new spectral analysis technique is proposed for rhythmic bedform quantification, based on the 2D Fourier transform involving the calculation of a set of low-order spectral moments. The approach provides a tool for efficient quantification of bedform length and height as well as spatial crest-line alignment. Contrary to the conventional method, it not only describes the most energetic component of an undulating seabed surface but also retrieves information on its secondary structure without application of any band-pass filter of which the upper and lower cut-off frequencies are a priori unknown. Validation is based on bathymetric data collected in the main Vistula River mouth area (Przekop Wisły), Poland. This revealed two generations (distinct groups) of dunes which are migrating seawards along distinct paths, probably related to the hydrological regime of the river. The data enable the identification of dune divergence and convergence zones. The approach proved successful in the parameterisation of topographic roughness, an essential aspect in numerical modelling studies.
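The paper characterizes the spectrum through low-order spectral moments; the sketch below shows only the shared first step, in which the dominant bedform wavelength and crest-normal orientation are read off the peak of the 2D power spectrum of a detrended bathymetry patch (grid spacing and units are assumptions):

    import numpy as np

    def dominant_bedform(z, dx):
        """z: 2D elevation grid (m); dx: grid spacing (m)."""
        z = z - z.mean()                             # remove the mean level
        P = np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
        ny, nx = z.shape
        kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
        ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
        P[ny // 2, nx // 2] = 0                      # suppress the DC term
        iy, ix = np.unravel_index(np.argmax(P), P.shape)
        k = np.hypot(kx[ix], ky[iy])
        wavelength = 1.0 / k                         # dominant bedform length
        orientation = np.degrees(np.arctan2(ky[iy], kx[ix]))  # crest normal
        return wavelength, orientation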
NASA Astrophysics Data System (ADS)
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-03-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
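To make the annealer framing concrete, here is a toy encoding (not the authors' actual mapping of binding affinity data): choosing binary feature weights w that minimize ||Xw - y||² is a QUBO because w_i² = w_i for binary variables. The resulting Q matrix could be handed to an annealer; tiny instances can be checked by brute force.

    import itertools
    import numpy as np

    def qubo_from_least_squares(X, y):
        """Q such that w^T Q w = ||X w - y||^2 - y^T y for binary w."""
        G, h = X.T @ X, X.T @ y
        Q = 2 * np.triu(G, k=1)                  # pairwise couplings
        np.fill_diagonal(Q, np.diag(G) - 2 * h)  # linear terms on the diagonal
        return Q

    def brute_force_qubo(Q):
        """Exhaustive minimizer, feasible only for very small problems."""
        n = Q.shape[0]
        best = min(itertools.product([0, 1], repeat=n),
                   key=lambda w: np.array(w) @ Q @ np.array(w))
        return np.array(best)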
Liu, Yijin; Meirer, Florian; Krest, Courtney M.; ...
2016-08-30
To understand how hierarchically structured functional materials operate, analytical tools are needed that can reveal small structural and chemical details in large sample volumes. Often, a single method alone is not sufficient to get a complete picture of processes happening at multiple length scales. Here we present a correlative approach combining three-dimensional X-ray imaging techniques at different length scales for the analysis of metal poisoning of an individual catalyst particle. The correlative nature of the data allowed us to establish a macro-pore network model that interprets metal accumulations as a resistance to mass transport and can, by tuning the effect of metal deposition, simulate the response of the network to a virtual ageing of the catalyst particle. In conclusion, the developed approach is generally applicable and provides an unprecedented view of dynamic changes in a material's pore space, an essential factor in the rational design of functional porous materials.
Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.
2014-02-01
One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows contextualizing, within a three-dimensional anatomic framework, the neurons whose dynamics have been observed in high detail in vivo.
A theory-based approach to nursing shared governance.
Joseph, M Lindell; Bogue, Richard J
2016-01-01
The discipline of nursing uses a general definition of shared governance. The discipline's lack of a specified theory with precepts and propositions contributes to persistent barriers in progress toward building evidence-based knowledge through systematic study. The purposes of this article were to describe the development and elements of a program theory approach for nursing shared governance implementation and to recommend further testing. Five studies using multiple methods are described using a structured framework. The studies led to the use of Lipsey's method of theory development for program implementation to develop a theory of shared governance for nursing. Nine competencies were verified to define nursing practice council effectiveness. Other findings reveal that nurse empowerment results from alignment between the competencies of self-directed work teams and the competencies of organizational leaders. Implementation of GEMS theory-based nursing shared governance can advance goals at the individual, unit, department, and organization levels. Advancing professional nursing practice requires that nursing concepts are systematically studied and then formalized for implementation. This article describes the development of a theoretical foundation for the systematic study and implementation of nursing shared governance. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
ProbFAST: Probabilistic functional analysis system tool.
Silva, Israel T; Vêncio, Ricardo Z N; Oliveira, Thiago Y K; Molfetta, Greice A; Silva, Wilson A
2010-03-30
The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges are centered on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to the usual differential expression methods, which were initially designed for paired analysis. We propose a web platform named ProbFAST for MDE analysis, which uses Bayesian inference to identify key genes that are intuitively prioritized by means of probabilities. A simulation study revealed that our method performs better than other approaches, and when applied to public expression data, we demonstrated its flexibility in obtaining relevant genes biologically associated with normal and abnormal biological processes. ProbFAST is a freely accessible web-based application that enables MDE analysis on a global scale. It offers an efficient methodological approach for MDE analysis of sets of genes that are turned on and off in relation to functional information during the evolution of a tumor or tissue differentiation. The ProbFAST server can be accessed at http://gdm.fmrp.usp.br/probfast.
Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R
2016-12-01
The huge number of voxels in fMRI over time poses a major challenge for effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address these limitations, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification, applicable across multiple subjects. In addition, the fuzzy integral (FI) approach has been employed as an efficient tool for combining different classifiers. The FI approach led to the development of a classifier ensemble technique that performs better than any single classifier by reducing misclassification, bias, and variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. Comparison of the ensemble method's performance against that of the individual neural networks strongly points toward the usefulness of the proposed approach.
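As an illustration of fuzzy integral fusion, the sketch below combines per-classifier confidence scores with a Sugeno integral under a lambda-fuzzy measure; the scores and reliability densities are hypothetical, and the paper's actual ensemble is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_integral(scores, densities):
    """Fuse per-classifier confidence scores for one class with the
    Sugeno fuzzy integral under a lambda-fuzzy measure. Densities encode
    each classifier's assumed reliability. A sketch of the FI idea, not
    the paper's ensemble."""
    g = np.asarray(densities, float)
    # lambda solves prod(1 + l*g_i) = 1 + l with l > -1, l != 0
    # (densities summing exactly to 1 give l = 0; not handled here)
    f = lambda l: np.prod(1 + l * g) - (1 + l)
    lam = brentq(f, -0.999999, -1e-9) if g.sum() > 1 else brentq(f, 1e-9, 1e6)
    order = np.argsort(scores)[::-1]      # best-supported classifier first
    G, fused = 0.0, 0.0
    for i in order:
        G = g[i] + G + lam * g[i] * G     # measure of the top-k classifier set
        fused = max(fused, min(scores[i], G))
    return fused

# three classifiers scoring class "A" for one fMRI sample (hypothetical)
print(sugeno_integral([0.9, 0.6, 0.3], densities=[0.4, 0.35, 0.3]))
```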
Medium-sized tandem repeats represent an abundant component of the Drosophila virilis genome.
Abdurashitov, Murat A; Gonchar, Danila A; Chernukhin, Valery A; Tomilov, Victor N; Tomilova, Julia E; Schostak, Natalia G; Zatsepina, Olga G; Zelentsova, Elena S; Evgen'ev, Michael B; Degtyarev, Sergey K H
2013-11-09
Previously, we developed a simple method for carrying out a restriction enzyme analysis of eukaryotic DNA in silico, based on the known DNA sequences of the genomes. This method allows the user to calculate the lengths of all DNA fragments that are formed after a whole genome is digested at the theoretical recognition sites of a given restriction enzyme. A comparison of the observed peaks in distribution diagrams with the results from DNA cleavage using several restriction enzymes performed in vitro has shown good correspondence between the theoretical and experimental data in several cases. Here, we applied this approach to the annotated genome of Drosophila virilis, which is extremely rich in various repeats, using a combined strategy to perform the restriction analysis of D. virilis DNA. This approach enabled us to reveal three abundant families of medium-sized tandem repeats within the D. virilis genome. While the 225 bp repeats had been revealed previously in intergenic non-transcribed spacers between the ribosomal genes of D. virilis, the two other families, comprising 154 bp and 172 bp repeats, had not been described. A Tandem Repeats Finder search demonstrated that the 154 bp and 172 bp units are organized in multiple clusters in the genome of D. virilis. Characteristically, only the 154 bp repeats, derived from a Helitron transposon, are transcribed. Using in silico digestion in combination with conventional restriction analysis and sequencing of repeated DNA fragments enabled us to isolate and characterize three highly abundant families of medium-sized repeats present in the D. virilis genome. These repeats comprise a significant portion of the genome and may have important roles in genome function and structural integrity. We have therefore demonstrated an approach that makes it possible to investigate in detail the gross arrangement and expression of medium-sized repeats based on sequencing data, even for incompletely assembled and/or annotated genomes.
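The core of the in silico digestion step can be sketched in a few lines: cut a sequence at every occurrence of a recognition site and tabulate fragment lengths, so that a tandem repeat containing the site shows up as a sharp peak in the length distribution. This is a toy illustration (single strand, cutting at the site start), not the authors' software.

```python
from collections import Counter

def digest_fragment_lengths(genome, site):
    """Cut a sequence at every occurrence of a restriction site and
    return the distribution of fragment lengths. Real tools also handle
    ambiguous bases, cut offsets within the site, and both strands."""
    cuts, pos = [], genome.find(site)
    while pos != -1:
        cuts.append(pos)
        pos = genome.find(site, pos + 1)
    bounds = [0] + cuts + [len(genome)]
    return Counter(b - a for a, b in zip(bounds, bounds[1:]))

# toy genome: a 16 bp tandem repeat unit containing a BamHI site,
# so the digest produces a strong peak at 16 bp
unit = "GGATCC" + "ACGTACGTAC"
genome = "TTTT" + unit * 100 + "AAAA"
print(digest_fragment_lengths(genome, "GGATCC").most_common(3))
```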
Nonclassical light revealed by the joint statistics of simultaneous measurements.
Luis, Alfredo
2016-04-15
Nonclassicality cannot be a single-observable property, since the statistics of any quantum observable is compatible with classical physics. We develop a general procedure to reveal nonclassical behavior of light states from the joint statistics arising in the practical measurement of multiple observables. Besides embracing previous approaches, this protocol can disclose nonclassical features for standard examples of classical-like behavior, such as SU(2) and Glauber coherent states. When combined with other criteria, this would imply that every light state is nonclassical.
A Mixed-Methods Exploration of an Environment for Learning Computer Programming
ERIC Educational Resources Information Center
Mather, Richard
2015-01-01
A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced-learning environments. Here, ethnographic approaches…
Carlisle, Aaron B.; Goldman, Kenneth J.; Litvin, Steven Y.; Madigan, Daniel J.; Bigman, Jennifer S.; Swithenbank, Alan M.; Kline, Thomas C.; Block, Barbara A.
2015-01-01
Ontogenetic changes in habitat are driven by shifting life-history requirements and play an important role in population dynamics. However, large portions of the life history of many pelagic species are still poorly understood or unknown. We used a novel combination of stable isotope analysis of vertebral annuli, Bayesian mixing models, isoscapes and electronic tag data to reconstruct ontogenetic patterns of habitat and resource use in a pelagic apex predator, the salmon shark (Lamna ditropis). Results identified the North Pacific Transition Zone as the major nursery area for salmon sharks and revealed an ontogenetic shift around the age of maturity from oceanic to increased use of neritic habitats. The nursery habitat may reflect trade-offs between prey availability, predation pressure and thermal constraints on juvenile endothermic sharks. The ontogenetic shift in habitat coincided with a reduction of isotopic niche, possibly reflecting specialization upon particular prey or habitats. Using tagging data to inform Bayesian isotopic mixing models revealed that adult sharks primarily use neritic habitats of Alaska yet receive a trophic subsidy from oceanic habitats. Integrating the multiple methods used here provides a powerful approach to retrospectively study the ecology and life history of migratory species throughout their ontogeny. PMID:25621332
Methods for measuring denitrification: Diverse approaches to a difficult problem
Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.
2006-01-01
Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.
Mango: multiple alignment with N gapped oligos.
Zhang, Zefeng; Lin, Hao; Li, Ming
2008-06-01
Multiple sequence alignment is a classical and challenging task. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art works suffer from the "once a gap, always a gap" phenomenon. Is there a radically new way to do multiple sequence alignment? In this paper, we introduce a novel and orthogonal multiple sequence alignment method, using both multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole and tries to build the alignment vertically, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds have proved significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks, showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0. We have further demonstrated the scalability of MANGO on very large datasets of repeat elements. MANGO can be downloaded at http://www.bioinfo.org.cn/mango/ and is free for academic usage.
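A toy sketch of the spaced-seed ingredient: index every sequence by the characters at the '1' positions of a seed pattern and keep keys supported by most sequences as candidate vertical anchors. The seed pattern here is arbitrary, and MANGO's actual optimized seeds and alignment construction are far more involved.

```python
def seed_hits(seqs, seed="1101011"):
    """Group sequence positions that agree at the '1' positions of a
    spaced seed; well-supported keys suggest alignment columns. A toy
    illustration of spaced seeding, not MANGO's algorithm."""
    care = [i for i, c in enumerate(seed) if c == "1"]
    span = len(seed)
    index = {}
    for sid, s in enumerate(seqs):
        for pos in range(len(s) - span + 1):
            key = "".join(s[pos + i] for i in care)
            index.setdefault(key, []).append((sid, pos))
    # keep keys hit by more than half of the sequences
    return {k: v for k, v in index.items()
            if len({sid for sid, _ in v}) > len(seqs) / 2}

seqs = ["ACGTTGCAACGT", "ACGATGCAACGA", "TCGTTGCAACGT"]
for key, hits in seed_hits(seqs).items():
    print(key, hits)   # mismatches at '0' positions are tolerated
```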
Agopian, A J; Evans, Jane A; Lupo, Philip J
2018-01-15
It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big data strategies for prioritizing MCA patterns may provide new avenues for better understanding co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
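The observed-to-expected ratio mentioned above is straightforward to compute; a minimal sketch follows, using hypothetical registry counts and a simple one-sided Poisson p-value, not any specific registry's implementation.

```python
from math import exp, factorial

def oe_ratio(n_both, n_a, n_b, n_total):
    """Observed-to-expected ratio for co-occurrence of two defects,
    with a one-sided Poisson p-value for observing at least n_both
    cases under independence. Illustrative of the O/E approach only."""
    expected = n_a * n_b / n_total      # expected count under independence
    oe = n_both / expected
    p = 1.0 - sum(exp(-expected) * expected**k / factorial(k)
                  for k in range(n_both))
    return oe, expected, p

# hypothetical counts: defect A in 200 infants, defect B in 150,
# 12 infants with both, 100000 infants in the registry
print(oe_ratio(12, 200, 150, 100000))   # O/E = 40, expected = 0.3
```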
2017-01-01
Analyzing lipid composition and distribution within the brain is important to study white matter pathologies that present focal demyelination lesions, such as multiple sclerosis. Some lesions can endogenously re-form myelin sheaths. Therapies aim to enhance this repair process in order to reduce neurodegeneration and disability progression in patients. In this context, a lipidomic analysis providing both precise molecular classification and well-defined localization is crucial to detect changes in myelin lipid content. Here we develop a correlated heterospectral lipidomic (HSL) approach based on coregistered Raman spectroscopy, desorption electrospray ionization mass spectrometry (DESI-MS), and immunofluorescence imaging. We employ HSL to study the structural and compositional lipid profile of demyelination and remyelination in an induced focal demyelination mouse model and in multiple sclerosis lesions from patients ex vivo. Pixelwise coregistration of Raman spectroscopy and DESI-MS imaging generated a heterospectral map used to interrelate biomolecular structure and composition of myelin. Multivariate regression analysis enabled Raman-based assessment of highly specific lipid subtypes in complex tissue for the first time. This method revealed the temporal dynamics of remyelination and provided the first indication that newly formed myelin has a different lipid composition compared to normal myelin. HSL enables detailed molecular myelin characterization that can substantially improve upon the current understanding of remyelination in multiple sclerosis and provides a strategy to assess remyelination treatments in animal models. PMID:29392175
Broessner, Gregor; Beer, Ronny; Franz, Gerhard; Lackner, Peter; Engelhardt, Klaus; Brenneis, Christian; Pfausler, Bettina; Schmutzhard, Erich
2005-01-01
Introduction We report the case of a patient who developed severe post-exertional heat stroke with consecutive multiple organ dysfunction resistant to conventional antipyretic treatment, necessitating the use of a novel endovascular device to combat hyperthermia and maintain normothermia. Methods A 38-year-old man suffering from severe heat stroke, with predominant signs and symptoms of encephalopathy requiring acute intensive care, was admitted to a ten-bed neurological intensive care unit of a tertiary care hospital. The patient developed consecutive multiple organ dysfunction with rhabdomyolysis, and hepatic and respiratory failure. Temperature elevation was resistant to conventional treatment measures. Aggressive intensive care treatment included forced diuresis and endovascular cooling to combat hyperthermia and maintain normothermia. Results Analyses of serum revealed elevation of proinflammatory cytokines (TNF alpha, IL-6), cytokines (IL-2R), anti-inflammatory cytokines (IL-4) and chemokines (IL-8), as well as signs of rhabdomyolysis and hepatic failure. Aggressive intensive care treatment, including forced diuresis and endovascular cooling (CoolGard® and CoolLine®), was used successfully to treat this severe heat stroke. Conclusion In this case of severe heat stroke, presenting with multiple organ dysfunction and elevation of cytokines and chemokines, which was resistant to conventional cooling therapies, endovascular cooling may have contributed significantly to the reduction of body temperature and, possibly, averted a fatal outcome. PMID:16285034
What have we learned about GPER function in physiology and disease from knockout mice?
Prossnitz, Eric R.; Hathaway, Helen J.
2015-01-01
Estrogens, predominantly 17β-estradiol, exert diverse effects throughout the body in both normal physiology and pathophysiology, during development and in the reproductive, metabolic, endocrine, cardiovascular, nervous, musculoskeletal and immune systems. Estrogen and its receptors also play important roles in carcinogenesis and therapy, particularly for breast cancer. In addition to the classical nuclear estrogen receptors (ERα and ERβ) that traditionally mediate predominantly genomic signaling, the G protein-coupled estrogen receptor GPER has become recognized as a critical mediator of rapid signaling in response to estrogen. Mouse models, and in particular knockout (KO) mice, represent an important approach to understand the functions of receptors in normal physiology and disease. Whereas ERα KO mice display multiple significant defects in reproduction and mammary gland development, ERβ KO phenotypes are more limited, and GPER KO mice exhibit no reproductive deficits. However, the study of GPER KO mice over the last six years has revealed that GPER deficiency results in multiple physiological alterations, including obesity, cardiovascular dysfunction, insulin resistance and glucose intolerance. In addition, the lack of estrogen-mediated effects in numerous tissues of GPER KO mice, studied in vivo or ex vivo, including those of the cardiovascular, endocrine, nervous and immune systems, reveals GPER as a genuine mediator of estrogen action. Importantly, GPER KO mice have also revealed roles for GPER in breast carcinogenesis and metastasis. In combination with the supporting effects of GPER-selective ligands and GPER knockdown approaches, GPER KO mice demonstrate the therapeutic potential of targeting GPER activity in diseases as diverse as obesity, diabetes, multiple sclerosis, hypertension, atherosclerosis, myocardial infarction, stroke and cancer. PMID:26189910
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data and hence provide a computationally attractive approach for integrating information about a reservoir model.
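The preferential-path idea can be sketched compactly: rank grid nodes by how informative their soft data are (here by the entropy of a two-facies probability, an assumption for illustration) and visit the most informed first. This shows only the path construction; the conditioning changes inside SNESIM/ENESIM/Direct Sampling are not reproduced.

```python
import numpy as np

def preferential_path(soft_prob):
    """Order grid nodes for sequential simulation so that the most
    informed nodes (lowest entropy of the soft probability of facies 1)
    are visited first, with small random jitter to break ties. A sketch
    of the preferential-path idea only."""
    p = np.clip(soft_prob.ravel(), 1e-9, 1 - 1e-9)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    jitter = 0.01 * np.random.rand(p.size)
    return np.argsort(entropy + jitter)   # flat node indices, most informed first

soft = np.random.rand(10, 10)   # hypothetical soft probability of facies 1
path = preferential_path(soft)
print(path[:10])                # first ten nodes to simulate
```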
Successful Latina Scientists and Engineers: Their Lived Mentoring Experiences and Career Development
ERIC Educational Resources Information Center
San Miguel, Anitza M.; Kim, Mikyong Minsun
2015-01-01
Utilizing a phenomenological perspective and method, this study aimed to reveal the lived career mentoring experiences of Latinas in science and engineering and to understand how selected Latina scientists and engineers achieved high-level positions. Our in-depth interviews revealed that (a) it is important to have multiple mentors for Latinas'…
Most DNA-based microbial source tracking (MST) approaches target host-associated organisms within the order Bacteroidales, but human and other animal gut microbiota contain an array of other taxonomic groups that might serve as indicators for sources of fecal pollution. High thr...
Liu, Jing; Li, Yongping; Huang, Guohe; Fu, Haiyan; Zhang, Junlong; Cheng, Guanhui
2017-06-01
In this study, a multi-level-factorial risk-inference-based possibilistic-probabilistic programming (MRPP) method is proposed for supporting water quality management under multiple uncertainties. The MRPP method can handle uncertainties expressed as fuzzy-random-boundary intervals, probability distributions, and interval numbers, and analyze the effects of uncertainties as well as their interactions on modeling outputs. It is applied to plan water quality management in the Xiangxihe watershed. Results reveal that a lower probability of satisfying the objective function (θ) as well as a higher probability of violating environmental constraints (q_i) would correspond to a higher system benefit with an increased risk of violating system feasibility. Chemical plants are the major contributors to biological oxygen demand (BOD) and total phosphorus (TP) discharges; total nitrogen (TN) would be mainly discharged by crop farming. It is also discovered that optimistic decision makers should pay more attention to the interactions between chemical plants and water supply, while decision makers who possess a risk-averse attitude would focus on the interactive effect of q_i and the benefit of water supply. The findings can help enhance the model's applicability and identify a suitable water quality management policy for environmental sustainability according to the practical situation.
Maimaiti, Aili; Holzmann, Daniela; Truong, Viet Giang; Ritsch, Helmut; Nic Chormaic, Síle
2016-01-01
Particles trapped in the evanescent field of an ultrathin optical fibre interact over very long distances via multiple scattering of the fibre-guided fields. In ultrathin fibres that support higher order modes, these interactions are stronger and exhibit qualitatively new behaviour due to the coupling of different fibre modes, which have different propagation wave-vectors, by the particles. Here, we study one-dimensional longitudinal optical binding interactions of chains of 3 μm polystyrene spheres under the influence of the evanescent fields of a two-mode microfibre. The observation of long-range interactions, self-ordering and speed variation of particle chains reveals strong optical binding effects between the particles that can be modelled well by a tritter scattering-matrix approach. The optical forces, optical binding interactions and the velocity of bound particle chains are calculated using this method. Results show good agreement with finite element numerical simulations. Experimental data and theoretical analysis show that higher order modes in a microfibre offer a promising method not only to obtain stable, multiple-particle trapping or faster particle propulsion speeds, but also to allow for better control over each individual trapped object in particle ensembles near the microfibre surface. PMID:27451935
Witcher, Chad S G; McGannon, Kerry R; Hernandez, Paul; Dechman, Gail; Ferrier, Suzanne; Spence, John C; Rhodes, Ryan E; Blanchard, Chris M
2015-11-01
Exercise training within the pulmonary rehabilitation (PR) context is considered the most effective strategy to reduce COPD symptoms. However, participation in PR and continued exercise training following program completion are low. Previous research examined factors related to attendance and adherence, but the knowledge base to date has been limited to quantitative findings that focus solely on participants diagnosed with COPD. In addition to quantitative research, exploring multiple perspectives (eg, PR participants, significant others, staff, and stakeholders) using qualitative research methods opens a window of additional understanding. The goal of this study was to obtain multiple perspectives on PR to gain insight into factors that affect exercise participation among individuals diagnosed with COPD. A total of 26 participants were interviewed via telephone, including 8 individuals diagnosed with COPD (4 men and 4 women, mean age of 67 [range of 58-77] y), 4 family members, 11 PR staff, and 3 community stakeholders. Analysis revealed 3 themes: task self-efficacy for exercise, provision of support and encouragement, and perceptions of gender differences. Despite initial concerns, individuals diagnosed with COPD reported becoming more confident during PR and emphasized the importance of being supported by staff. PR staff perceived that men tended to approach exercise in a more eager and aggressive manner compared with women, who were more cautious and hesitant. In addition to enhancing task self-efficacy, findings suggest that exercise participation and adherence within the PR environment may be improved by adopting a gender-tailored approach. Copyright © 2015 by Daedalus Enterprises.
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
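The integral formulation can be illustrated on the simpler single-compartment model Paw = E*V + R*Q + P0: integrating both sides turns identification into a least-squares problem on smoothed signals, which damps noise. The paper applies an iterated version of this idea to a second-order model; the sketch below, with hypothetical mechanics values, shows only the non-iterated first-order core.

```python
import numpy as np

def integral_identify(t, flow, paw):
    """Identify elastance E, resistance R and offset P0 of the model
    Paw = E*V + R*Q + P0 by least squares on the integrated equation:
    Int(Paw) = E*Int(V) + R*V + P0*t. A sketch of the integral
    formulation, not the paper's iterative second-order scheme."""
    dt = t[1] - t[0]
    V = np.cumsum(flow) * dt                 # volume from flow
    y = np.cumsum(paw) * dt                  # integrated pressure
    A = np.column_stack([np.cumsum(V) * dt, V, t])
    (E, R, P0), *_ = np.linalg.lstsq(A, y, rcond=None)
    return E, R, P0

# synthetic breath: constant inspiratory flow, known mechanics, added noise
t = np.linspace(0, 1, 500)
Q = np.full_like(t, 0.5)                     # flow in L/s
V = np.cumsum(Q) * (t[1] - t[0])
paw = 20 * V + 5 * Q + 5 + 0.3 * np.random.randn(t.size)
print(integral_identify(t, Q, paw))          # approximately (20, 5, 5)
```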
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
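A toy continuous tabu search captures the two ingredients named in the abstract: accepting the best admissible move even if it is uphill, and forbidding a return to recently visited neighbourhoods. All parameter values are illustrative; this is neither Glover's nor the authors' exact algorithm.

```python
import random

def tabu_search_continuous(f, x0, step=0.5, n_iter=500, tabu_len=20, radius=0.3):
    """Minimal continuous tabu search: move to the best neighbour even
    if it worsens f, but reject candidates near recently visited points."""
    x, best, fbest = x0, x0, f(x0)
    tabu = [x0]
    for _ in range(n_iter):
        neighbours = [tuple(xi + random.uniform(-step, step) for xi in x)
                      for _ in range(30)]
        # discard neighbours too close (Chebyshev distance) to tabu points
        allowed = [c for c in neighbours
                   if all(max(abs(a - b) for a, b in zip(c, t)) > radius
                          for t in tabu)]
        if not allowed:
            continue
        x = min(allowed, key=f)          # best admissible move, even uphill
        tabu.append(x)
        tabu = tabu[-tabu_len:]          # forget the oldest prohibitions
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest

# Himmelblau's function has four global minima and several local traps
him = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
print(tabu_search_continuous(him, (0.0, 0.0)))   # near-optimal, f close to 0
```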
Chiba, Shuntaro; Ikeda, Kazuyoshi; Ishida, Takashi; Gromiha, M Michael; Taguchi, Y-H; Iwadate, Mitsuo; Umeyama, Hideaki; Hsin, Kun-Yi; Kitano, Hiroaki; Yamamoto, Kazuki; Sugaya, Nobuyoshi; Kato, Koya; Okuno, Tatsuya; Chikenji, George; Mochizuki, Masahiro; Yasuo, Nobuaki; Yoshino, Ryunosuke; Yanagisawa, Keisuke; Ban, Tomohiro; Teramoto, Reiji; Ramakrishnan, Chandrasekaran; Thangakani, A Mary; Velmurugan, D; Prathipati, Philip; Ito, Junichi; Tsuchiya, Yuko; Mizuguchi, Kenji; Honma, Teruki; Hirokawa, Takatsugu; Akiyama, Yutaka; Sekijima, Masakazu
2015-11-26
A search of a broader range of chemical space is important for drug discovery. Different methods of computer-aided drug discovery (CADD) are known to propose compounds in different chemical spaces as hit molecules for the same target protein. This study aimed at using multiple CADD methods through open innovation to achieve a level of hit molecule diversity that is not achievable with any particular single method. We held a compound proposal contest, in which multiple research groups participated and predicted inhibitors of tyrosine-protein kinase Yes. This showed whether collective knowledge based on individual approaches helped to obtain hit compounds from a broad range of chemical space and whether the contest-based approach was effective.
ERIC Educational Resources Information Center
Whitt, Elizabeth J.; Kuh, George D.
A team approach to the use of qualitative methods in a study of high quality out-of-class experiences for undergraduate students at 14 institutions of higher education is described. Four research questions examined: the physical and organizational features characterizing the institutions; the nature of institutional policies related to…
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach that projects hydrology under climate change using super ensemble experiments. The use of multiple ensembles is essential for the estimation of extremes, which is a major issue in the impact assessment of climate change. Hence, super ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensembles, running a full hydrological simulation for each output of the ensemble simulations incurs considerable computational cost. To use super ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to run hydrological model simulations, including land-surface and river routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, runs only a river routing model, using runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step that corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, none for variables other than surface meteorology. Here, we propose a new method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, the result of a retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of validation using historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
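The abstract does not name the exact correction used, so as a hedged illustration the sketch below applies empirical quantile mapping, one standard choice, to map future model runoff onto the distribution of the pseudo-observed (retrospective offline) runoff; all data are synthetic.

```python
import numpy as np

def quantile_map(model_hist, pseudo_obs, model_fut):
    """Map each future model runoff value through the empirical CDF of
    the historical model run onto the quantiles of the pseudo-observed
    runoff. Quantile mapping is an assumed stand-in for the paper's
    correction method."""
    q = np.linspace(0, 100, 101)
    mq = np.percentile(model_hist, q)    # model-climate quantiles
    oq = np.percentile(pseudo_obs, q)    # pseudo-observation quantiles
    ranks = np.interp(model_fut, mq, q)  # percentile of each future value
    return np.interp(ranks, q, oq)

rng = np.random.default_rng(0)
pseudo = rng.gamma(2.0, 50.0, 5000)          # pseudo-observed runoff
hist = pseudo * 1.3 + 20                     # biased model climate
fut = rng.gamma(2.0, 55.0, 5000) * 1.3 + 20  # biased, slightly wetter future
print(quantile_map(hist, pseudo, fut).mean(), pseudo.mean())
```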
Multiplexed Sequence Encoding: A Framework for DNA Communication.
Zakeri, Bijan; Carr, Peter A; Lu, Timothy K
2016-01-01
Synthetic DNA has great potential for efficiently and stably storing non-biological information. With DNA writing and reading technologies rapidly advancing, new applications for synthetic DNA are emerging in data storage and communication. Traditionally, DNA communication has focused on the encoding and transfer of complete sets of information. Here, we explore the use of DNA for the communication of short messages that are fragmented across multiple distinct DNA molecules. We identified three pivotal points in a communication (data encoding, data transfer and data extraction) and developed novel tools to enable communication via molecules of DNA. To address data encoding, we designed DNA-based individualized keyboards (iKeys) to convert plaintext into DNA while reducing the occurrence of DNA homopolymers, thereby improving synthesis and sequencing processes. To address data transfer, we implemented a secret-sharing system, Multiplexed Sequence Encoding (MuSE), which conceals messages between multiple distinct DNA molecules, requiring a combination key to reveal messages. To address data extraction, we achieved the first instance of chromatogram patterning through multiplexed sequencing, thereby enabling a new method for data extraction. We envision these approaches will enable more widespread communication of information via DNA.
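A toy version of the homopolymer-avoiding encoding idea: assign each character a codon of three distinct bases, which rules out runs inside codons and limits boundary runs to length two. The codon table and alphabet below are invented for illustration; the actual iKey tables and the MuSE secret-sharing scheme are not reproduced here.

```python
import itertools

def build_ikey(alphabet):
    """Map each plaintext character to one of the 24 codons whose three
    bases are all distinct, so no codon contains a homopolymer run and
    runs across codon boundaries are at most two bases long. A toy
    stand-in for the paper's iKeys."""
    codons = [c for c in map("".join, itertools.product("ACGT", repeat=3))
              if len(set(c)) == 3]
    return dict(zip(alphabet, codons))

def encode(msg, ikey):
    return "".join(ikey[ch] for ch in msg)

ikey = build_ikey("abcdefghijklmnopqrstuvwx")   # 24 symbols, 24 codons
dna = encode("hello", ikey)
longest_run = max(len(list(g)) for _, g in itertools.groupby(dna))
print(dna, longest_run)                          # longest run is at most 2
```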
Buenconsejo, Pio John S; Siegel, Alexander; Savan, Alan; Thienhaus, Sigurd; Ludwig, Alfred
2012-01-09
For different areas of combinatorial materials science it is desirable to have multiple materials libraries, especially for irreversible high-throughput studies such as corrosion resistance testing in different media or annealing of complete materials libraries at different temperatures. Therefore a new combinatorial sputter-deposition process was developed which yields 24 materials libraries in one experiment on a single substrate. It is demonstrated with the example of 24 Ti-Ni-Ag materials libraries. Based on the composition coverage and the orientation of the composition gradient, they divide into two sets of 12 nearly identical materials libraries. Each materials library covers at least 30-40% of the complete ternary composition range. An acid etch test in buffered-HF solution was performed, illustrating the feasibility of our approach for destructive materials characterization. The results revealed that within the composition range of Ni < 30 at.%, the films were severely etched. The composition range which shows reversible martensitic transformations was confirmed to be outside this region. The high output of the present method makes it attractive for combinatorial studies requiring multiple materials libraries.
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
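The objective-weighting ingredient can be illustrated without the neurodynamic machinery: minimizing weighted sums of the objectives for several weight vectors traces out points on the Pareto front of a convex problem. The sketch below uses plain gradient descent on a hypothetical two-objective problem and does not reproduce the paper's collaborative networks or switching topology.

```python
import numpy as np

def pareto_by_weighting(objective_grads, x0, weights, steps=2000, lr=0.01):
    """Approximate several Pareto optimal points by minimizing weighted
    sums of the objectives, one gradient-descent run per weight vector."""
    pts = []
    for w in weights:
        x = np.array(x0, float)
        for _ in range(steps):
            g = sum(wi * gi(x) for wi, gi in zip(w, objective_grads))
            x -= lr * g
        pts.append(x.copy())
    return pts

# two convex objectives f1 = |x-a|^2, f2 = |x-b|^2; Pareto set = segment [a, b]
a, b = np.array([0.0, 0.0]), np.array([2.0, 1.0])
grads = [lambda x: 2 * (x - a), lambda x: 2 * (x - b)]
ws = [(1 - t, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(pareto_by_weighting(grads, [5.0, 5.0], ws))  # points along the segment
```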
A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets.
Carrig, Madeline M; Manrique-Vallier, Daniel; Ranby, Krista W; Reiter, Jerome P; Hoyle, Rick H
2015-01-01
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches.
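As a crude stand-in for the imputation step, the sketch below fills a variable missing from one data set by sampling values from nearest neighbours (on the shared variables) in a fully measured calibration sample, repeated m times to yield multiple imputations; the article's actual nonparametric model is more sophisticated, and all data here are synthetic.

```python
import numpy as np

def nn_multiple_impute(calib_X, calib_Y, data_X, m=5, k=5, seed=0):
    """For each record lacking Y, sample Y from one of its k nearest
    calibration-sample neighbours on the shared variables X; repeating
    m times yields multiple imputed data sets."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(calib_X[None, :, :] - data_X[:, None, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest calibration rows
    n = len(data_X)
    return [calib_Y[nn[np.arange(n), rng.integers(0, k, n)]]
            for _ in range(m)]

calib_X = np.random.rand(200, 2)                 # shared variables
calib_Y = calib_X @ np.array([1.0, -2.0]) + 0.1 * np.random.randn(200)
data_X = np.random.rand(50, 2)                   # data set missing Y
imputed = nn_multiple_impute(calib_X, calib_Y, data_X)
print(np.mean(imputed, axis=0)[:5])              # average over imputations
```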
A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets
Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.
2015-01-01
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437
NASA Astrophysics Data System (ADS)
Yang, Yujie; Dong, Di; Shi, Liangliang; Wang, Jun; Yang, Xin; Tian, Jie
2015-03-01
Optical projection tomography (OPT) is a mesoscopic-scale optical imaging technique for specimens between 1 mm and 10 mm. OPT has proven immensely useful in a wide variety of biological applications, such as developmental biology and pathology, but its shortcomings in imaging specimens containing widely differing contrast elements are obvious. A longer exposure may oversaturate high-intensity areas, whereas a relatively short exposure may leave weak signals indistinguishable from the surrounding background. In this paper, we propose an approach that makes a trade-off between capturing weak signals and revealing more details in OPT imaging. This approach consists of three steps. First, the specimen is scanned only once over 360 degrees, at a single exposure above normal but below saturation, to acquire the projection data. This reduces photobleaching and the pre-registration computation required by the multiple exposures of the conventional high dynamic range (HDR) imaging method. Second, three virtual channels are produced for each projection image, based on the histogram distribution, to simulate the low, normal and high exposure images used in traditional HDR photography. Finally, each virtual channel is normalized to the full gray-scale range and the three channels are recombined into one image using weighting coefficients optimized by a standard eigen-decomposition method. After applying our approach to the projection data, the filtered back projection (FBP) algorithm is carried out for three-dimensional reconstruction. A neonatal wild-type mouse paw was scanned to verify this approach, and the results demonstrated its effectiveness.
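The three-step idea can be mimicked in a few lines: derive three "virtual exposures" from one projection, normalize each, and recombine with weights. The sketch below uses gamma stretching and fixed weights as stand-ins for the paper's histogram-based channels and eigen-decomposition weighting, so it is an illustration of the scheme rather than the authors' method.

```python
import numpy as np

def virtual_hdr(img, w=(0.25, 0.5, 0.25)):
    """Split one projection into three virtual exposures via gamma
    stretching (brighten shadows / keep midtones / compress highlights),
    normalize each channel to full range, and recombine with weights."""
    x = img.astype(float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    channels = [x**g for g in (0.5, 1.0, 2.0)]   # low/normal/high "exposures"
    channels = [(c - c.min()) / (c.max() - c.min() + 1e-12) for c in channels]
    return sum(wi * c for wi, c in zip(w, channels))

proj = np.random.rand(64, 64) ** 3   # dim projection with a few bright pixels
fused = virtual_hdr(proj)
print(proj.std(), fused.std())       # the fused image spreads contrast
```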
Case Study Research Methodology in Nursing Research.
Cope, Diane G
2015-11-01
Through data collection methods using a holistic approach that focuses on variables in a natural setting, qualitative research methods seek to understand participants' perceptions and interpretations. Common qualitative research methods include ethnography, phenomenology, grounded theory, and historical research. Another methodology with a similar qualitative approach is case study research, which seeks to understand a phenomenon or case from multiple perspectives within a given real-world context.
Musoke, David; Miiro, George; Karani, George; Morris, Keith; Kasasa, Simon; Ndejjo, Rawlance; Nakiyingi-Miiro, Jessica; Guwatudde, David; Musoke, Miph Boses
2015-01-01
Background The World Health Organization recommends the use of multiple approaches to control malaria. The integrated approach to malaria prevention advocates the use of several malaria prevention methods in a holistic manner. This study assessed perceptions and practices regarding integrated malaria prevention in Wakiso district, Uganda. Methods A clustered cross-sectional survey was conducted among 727 households from 29 villages using both quantitative and qualitative methods. Assessment was done on awareness of the various malaria prevention methods, the potential for using the methods in a holistic manner, and reasons for disliking certain methods. Households were classified as using integrated malaria prevention if they used at least two methods. Logistic regression was used to test for factors associated with the use of integrated malaria prevention while adjusting for clustering within villages. Results Participants knew of the various malaria prevention methods in the integrated approach, including use of insecticide-treated nets (97.5%), removing mosquito breeding sites (89.1%), clearing overgrown vegetation near houses (97.9%), and closing windows and doors early in the evenings (96.4%). If trained, most participants (68.6%) would use all the suggested malaria prevention methods of the integrated approach. Among those who would not use all methods, the main reasons given were there being too many (70.2%) and cost (32.0%). Only 33.0% of households were using the integrated approach to prevent malaria. Use of integrated malaria prevention by households was associated with reading newspapers (AOR 0.34; 95% CI 0.22-0.53) and ownership of a motorcycle/car (AOR 1.75; 95% CI 1.03-2.98). Conclusion Although knowledge of malaria prevention methods was high and perceptions of the integrated approach were promising, the practice of integrated malaria prevention was relatively low. Use of the integrated approach can be improved by promoting multiple malaria prevention methods through various communication channels such as mass media. PMID:25837978
Predicting flight delay based on multiple linear regression
NASA Astrophysics Data System (ADS)
Ding, Yi
2017-08-01
Delay of flights has been regarded as one of the toughest difficulties in aviation control, and establishing an effective model for delay prediction is a significant task. To address the difficulty of predicting flight delay, this study proposes a method to model arriving flights and a multiple linear regression algorithm to predict delay, compared against Naive Bayes and C4.5 approaches. Experiments based on a realistic dataset of domestic airports show that the accuracy of the proposed model approximates 80%, an improvement over the Naive Bayes and C4.5 approaches. Testing also shows that the method is computationally convenient and can predict flight delays effectively, providing a decision basis for airport authorities.
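A minimal sketch of the regression step on synthetic data follows; the paper's dataset and feature set are not reproduced here, so all features below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical features per arriving flight: scheduled hour, airborne
# time (min), previous-leg delay (min); target: arrival delay (min).
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([rng.integers(0, 24, n),        # hour of day
                     rng.normal(90, 20, n),         # airborne time
                     rng.exponential(15, n)])       # upstream delay
y = 2.0 + 0.8 * X[:, 2] + 0.1 * X[:, 0] + rng.normal(0, 5, n)

A = np.column_stack([np.ones(n), X])                # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # ordinary least squares
pred = A @ coef

# one simple accuracy notion: share of predictions within 10 minutes
print(coef, np.mean(np.abs(pred - y) < 10.0))
```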
Resolving Recent Plant Radiations: Power and Robustness of Genotyping-by-Sequencing.
Fernández-Mazuecos, Mario; Mellers, Greg; Vigalondo, Beatriz; Sáez, Llorenç; Vargas, Pablo; Glover, Beverley J
2018-03-01
Disentangling species boundaries and phylogenetic relationships within recent evolutionary radiations is a challenge due to the poor morphological differentiation and low genetic divergence between species, frequently accompanied by phenotypic convergence, interspecific gene flow and incomplete lineage sorting. Here we employed a genotyping-by-sequencing (GBS) approach, in combination with morphometric analyses, to investigate a small western Mediterranean clade in the flowering plant genus Linaria that radiated in the Quaternary. After confirming the morphological and genetic distinctness of eight species, we evaluated the relative performances of concatenation and coalescent methods to resolve phylogenetic relationships. Specifically, we focused on assessing the robustness of both approaches to variations in the parameter used to estimate sequence homology (clustering threshold). Concatenation analyses suffered from strong systematic bias, as revealed by the high statistical support for multiple alternative topologies depending on clustering threshold values. By contrast, topologies produced by two coalescent-based methods (NJst, SVDquartets) were robust to variations in the clustering threshold. Reticulate evolution may partly explain incongruences between NJst, SVDquartets and concatenated trees. Integration of morphometric and coalescent-based phylogenetic results revealed (i) extensive morphological divergence associated with recent splits between geographically close or sympatric sister species and (ii) morphological convergence in geographically disjunct species. These patterns are particularly true for floral traits related to pollinator specialization, including nectar spur length, tube width and corolla color, suggesting pollinator-driven diversification. Given its relatively simple and inexpensive implementation, GBS is a promising technique for the phylogenetic and systematic study of recent radiations, but care must be taken to evaluate the robustness of results to variation in data assembly parameters.
Multi-PSF fusion in image restoration of range-gated systems
NASA Astrophysics Data System (ADS)
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui
2018-07-01
For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as backscattering, background radiation, the diffraction limit and the vibration of the platform. On one hand, because it is difficult to construct models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines the physical model with image features. With a fusion strategy using the GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. Aiming at the problem that a ground-truth image is difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser light propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
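The RL component named in the abstract is the classic Richardson-Lucy iteration; a non-blind sketch with a known, hypothetical Gaussian PSF follows. The paper's EM-based semi-blind wrapper, which also refines the kernel, is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Classic Richardson-Lucy deconvolution with a fixed, known PSF."""
    est = np.full_like(image, image.mean())      # flat initial estimate
    psf_flip = psf[::-1, ::-1]                   # adjoint of the blur
    for _ in range(n_iter):
        denom = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(denom, 1e-12)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est

# blur a synthetic scene with a Gaussian PSF, then restore it
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()
scene = np.zeros((64, 64)); scene[20:28, 30:38] = 1.0
blurred = fftconvolve(scene, psf, mode="same")
print(np.abs(richardson_lucy(blurred, psf) - scene).mean())  # small residual
```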
Kreitz, Silke; de Celis Alonso, Benito; Uder, Michael; Hess, Andreas
2018-01-01
Resting state (RS) connectivity has been increasingly studied in healthy and diseased brains in humans and animals. This paper presents a new method to analyze RS data from fMRI that combines multiple seed correlation analysis with graph theory (MSRA). We characterize and evaluate this new method in relation to two other graph-theoretical methods and ICA. The graph-theoretical methods calculate cross-correlations of regional average time courses, one using seed regions of the same size (SRCC) and the other using whole brain structure regions (RCCA). We evaluated the reproducibility, power, and capacity of these methods to characterize short-term RS modulation in response to unilateral physiological whisker stimulation in rats. Graph-theoretical networks found with the MSRA approach were highly reproducible, and their communities showed large overlaps with ICA components. Additionally, MSRA was the only one of the tested methods with the power to detect significant RS modulations induced by whisker stimulation under control of the family-wise error rate (FWE). Compared to the reduced resting state network connectivity during task performance, these modulations implied decreased connectivity strength in the bilateral sensorimotor and entorhinal cortex. Additionally, the contralateral ventromedial thalamus (part of the barrel-field-related lemniscal pathway) and the hypothalamus showed reduced connectivity. Enhanced connectivity was observed in the amygdala, especially the contralateral basolateral amygdala (involved in emotional learning processes). In conclusion, MSRA is a powerful analytical approach that can reliably detect tiny modulations of RS connectivity. It shows great promise as a method for studying RS dynamics in healthy and pathological conditions.
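The shared core of the compared graph-theoretical methods, thresholding pairwise correlations of average time courses into an adjacency matrix, can be sketched briefly; MSRA's multiple small seeds per structure and its graph statistics go beyond this illustration, and the threshold below is arbitrary.

```python
import numpy as np

def correlation_graph(timecourses, thresh=0.3):
    """Build an undirected connectivity graph by thresholding pairwise
    Pearson correlations of regional (or seed) average time courses."""
    r = np.corrcoef(timecourses)                     # regions x regions
    adj = (np.abs(r) > thresh) & ~np.eye(len(r), dtype=bool)
    degree = adj.sum(axis=1)                         # simple graph measure
    return adj, degree

tc = np.random.randn(8, 300)     # 8 regions, 300 fMRI volumes (synthetic)
tc[1] += 0.8 * tc[0]             # induce one strong functional edge
adj, deg = correlation_graph(tc)
print(deg, adj[0, 1])            # node degrees; edge 0-1 should be present
```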
Chou, Wen-Chi; Ma, Qin; Yang, Shihui; ...
2015-03-12
The identification of transcription units (TUs) encoded in a bacterial genome is essential to elucidation of transcriptional regulation of the organism. To gain a detailed understanding of the dynamically composed TU structures, we have used four strand-specific RNA-seq (ssRNA-seq) datasets collected under two experimental conditions to derive the genomic TU organization of Clostridium thermocellum using a machine-learning approach. Our method accurately predicted the genomic boundaries of individual TUs based on two sets of parameters measuring the RNA-seq expression patterns across the genome: expression-level continuity and variance. A total of 2590 distinct TUs are predicted based on the four RNA-seq datasets. Moreover, among the predicted TUs, 44% have multiple genes. We assessed our prediction method on an independent set of RNA-seq data with longer reads. The evaluation confirmed the high quality of the predicted TUs. Functional enrichment analyses on a selected subset of the predicted TUs revealed interesting biology. To demonstrate the generality of the prediction method, we have also applied the method to RNA-seq data collected on Escherichia coli and achieved high prediction accuracies. The TU prediction program named SeqTU is publicly available at https://code.google.com/p/seqtu/. We expect that the predicted TUs can serve as the baseline information for studying transcriptional and post-transcriptional regulation in C. thermocellum and other bacteria.
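A hedged sketch of the two feature families described above, expression-level continuity and variance, evaluated across a candidate TU boundary. This is not the SeqTU implementation; the window size, pseudocount, and log fold-change form are illustrative assumptions.

```python
import numpy as np

def boundary_features(coverage, boundary, window=50):
    """coverage: per-base read depth; boundary: candidate TU break position."""
    left = coverage[max(0, boundary - window):boundary].astype(float)
    right = coverage[boundary:boundary + window].astype(float)
    continuity = np.log2((right.mean() + 1.0) / (left.mean() + 1.0))
    variance = np.var(np.concatenate([left, right]))
    return continuity, variance  # features a TU/non-TU classifier could consume

cov = np.concatenate([np.random.poisson(80, 200), np.random.poisson(5, 200)])
print(boundary_features(cov, boundary=200))  # sharp drop -> likely TU border
```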
Genomic Data Quality Impacts Automated Detection of Lateral Gene Transfer in Fungi
Dupont, Pierre-Yves; Cox, Murray P.
2017-01-01
Lateral gene transfer (LGT, also known as horizontal gene transfer), an atypical mechanism of transferring genes between species, has almost become the default explanation for genes that display an unexpected composition or phylogeny. Numerous methods of detecting LGT events all rely on two fundamental strategies: primary structure composition or gene tree/species tree comparisons. Discouragingly, the results of these different approaches rarely coincide. With the wealth of genome data now available, detection of laterally transferred genes is increasingly being attempted in large uncurated eukaryotic datasets. However, detection methods depend greatly on the quality of the underlying genomic data, which are typically complex for eukaryotes. Furthermore, given the automated nature of genomic data collection, it is typically impractical to manually verify all protein or gene models, orthology predictions, and multiple sequence alignments, requiring researchers to accept a substantial margin of error in their datasets. Using a test case comprising plant-associated genomes across the fungal kingdom, this study reveals that composition- and phylogeny-based methods have little statistical power to detect laterally transferred genes. In particular, phylogenetic methods reveal extreme levels of topological variation in fungal gene trees, the vast majority of which show departures from the canonical species tree. Therefore, it is inherently challenging to detect LGT events in typical eukaryotic genomes. This finding is in striking contrast to the large number of claims for laterally transferred genes in eukaryotic species that routinely appear in the literature, and questions how many of these proposed examples are statistically well supported.
Power control apparatus and methods for electric vehicles
Gadh, Rajit; Chung, Ching-Yen; Chu, Chi-Cheng; Qiu, Li
2016-03-22
Electric vehicle (EV) charging apparatus and methods are described which allow the sharing of charge current between multiple vehicles connected to a single source of charging energy. In addition, this charge sharing can be performed in a grid-friendly manner by lowering current supplied to EVs when necessary in order to satisfy the needs of the grid, or building operator. The apparatus and methods can be integrated into charging stations or can be implemented with a middle-man approach in which a multiple EV charging box, which includes an EV emulator and multiple pilot signal generation circuits, is coupled to a single EV charge station.
Ameringer, Suzanne; Erickson, Jeanne M; Macpherson, Catherine Fiona; Stegenga, Kristin; Linder, Lauri A
2015-12-01
Adolescents and young adults (AYAs) with cancer experience multiple distressing symptoms during treatment. Because the typical approach to symptom assessment does not easily reflect the symptom experience of individuals, alternative approaches to enhancing communication between the patient and provider are needed. We developed an iPad-based application that uses a heuristic approach to explore AYAs' cancer symptom experiences. In this mixed-methods descriptive study, 72 AYAs (13-29 years old) with cancer receiving myelosuppressive chemotherapy used the Computerized Symptom Capture Tool (C-SCAT) to create images of the symptoms and symptom clusters they experienced from a list of 30 symptoms. They answered open-ended questions within the C-SCAT about the causes of their symptoms and symptom clusters. The images generated through the C-SCAT and accompanying free-text data were analyzed using descriptive, content, and visual analyses. Most participants (n = 70) reported multiple symptoms (M = 8.14). The most frequently reported symptoms were nausea (65.3%), feeling drowsy (55.6%), lack of appetite (55.6%), and lack of energy (55.6%). Forty-six grouped their symptoms into one or more clusters. The most common symptom cluster was nausea/eating problems/appetite problems. Nausea was most frequently named as the priority symptom in a cluster and as a cause of other symptoms. Although common threads were present in the symptoms experienced by AYAs, the graphic images revealed unique perspectives and a range of complexity of symptom relationships, clusters, and causes. Results highlight the need for a tailored approach to symptom management based on how the AYA with cancer perceives his or her symptom experience. © 2015 Wiley Periodicals, Inc.
Lam, Winsome; Fowler, Cathrine; Dawson, Angela
2016-01-01
In Hong Kong, the population is at risk of seasonal influenza infection twice a year. Seasonal influenza is significantly associated with the increased hospitalization of children. Maintaining personal hygiene and vaccination are the most effective measures to prevent influenza infection. Research demonstrates a positive relationship between the health practices applied by parents and the behaviour of their children highlighting the importance of parental heath education. However, there is minimal research that provides an understanding of how Hong Kong Chinese parents teach their children to prevent seasonal influenza. Mixed methods research was undertaken that employed a multiple-case study approach to gain an understanding of parental teaching practices regarding seasonal influenza prevention. Purposive intensity sampling was adopted to recruit twenty parents and their healthy children. A thematic analysis was employed to examine the qualitative interview data and the quantitative survey data were examined descriptively. These data were then integrated to provide a more rigorous understanding of parental teaching strategies. Comparisons were made across cases to reveal commonalities and differences. Five major themes were identified: processes parents used to teach personal hygiene; parent-child interaction during teaching; approaches to managing children's health behaviours; enhancing children's healthy practices; and parents' perspective of the role of the nurse in health promotion. This study provided valuable insight into the approach of Hong Kong Chinese parents in teaching their children to prevent seasonal influenza. The results indicate that parents can be better supported to develop effective strategies to teach their preschool children hygiene practices for seasonal influenza prevention. Partnerships with community nurses can play a role in building effective parent-child interactions to enhance children's learning and adoption of healthy practices.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Zhu, Bo; Niu, Hong; Zhang, Wengang; Wang, Zezhao; Liang, Yonghu; Guan, Long; Guo, Peng; Chen, Yan; Zhang, Lupei; Guo, Yong; Ni, Heming; Gao, Xue; Gao, Huijiang; Xu, Lingyang; Li, Junya
2017-06-14
Fatty acid composition of muscle is an important trait contributing to meat quality. Recently, genome-wide association study (GWAS) has been extensively used to explore the molecular mechanism underlying important traits in cattle. In this study, we performed GWAS using a high density SNP array to analyze the association between SNPs and fatty acids and evaluated the accuracy of genomic prediction for fatty acids in Chinese Simmental cattle. Using the BayesB method, we identified 35 and 7 regions in Chinese Simmental cattle that displayed significant associations with individual fatty acids and fatty acid groups, respectively. We further obtained several candidate genes which may be involved in fatty acid biosynthesis including elongation of very long chain fatty acids protein 5 (ELOVL5), fatty acid synthase (FASN), caspase 2 (CASP2) and thyroglobulin (TG). Specifically, we obtained strong evidence of association signals for one SNP located at 51.3 Mb for FASN using Genome-wide Rapid Association Mixed Model and Regression-Genomic Control (GRAMMAR-GC) approaches. Also, region-based association tests identified multiple SNPs within FASN and ELOVL5 for C14:0. In addition, our results revealed that the effectiveness of genomic prediction for fatty acid composition using BayesB was slightly superior to that of GBLUP in Chinese Simmental cattle. We identified several significantly associated regions and loci which can be considered as potential candidate markers for genomics-assisted breeding programs. Multiple methods provided strong evidence that FASN and ELOVL5 are associated with fatty acid composition. Our findings also suggest that it is feasible to perform genomic selection for fatty acids in Chinese Simmental cattle.
Testing a Weather Generator for Downscaling Climate Change Projections over Switzerland
NASA Astrophysics Data System (ADS)
Keller, Denise E.; Fischer, Andreas M.; Liniger, Mark A.; Appenzeller, Christof; Knutti, Reto
2016-04-01
Climate information provided by global or regional climate models (RCMs) is often too coarse and prone to substantial biases, making it impossible to directly use daily time-series of the RCMs for local assessments and in climate impact models. Hence, statistical downscaling becomes necessary. For the Swiss National Climate Change Initiative (CH2011), a delta-change approach was used to provide daily climate projections at the local scale. These data have the main limitation that changes in variability, extremes, and temporal structure, such as changes in wet-day frequency, are not reproduced. The latter is a considerable downside of the delta-change approach for many impact applications. In this regard, stochastic weather generators (WGs) are an appealing technique that allow the simulation of multiple realizations of synthetic weather sequences consistent with the locally observed weather statistics and their future changes. Here, we analyse a Richardson-type weather generator (WG) as an alternative method to downscale daily precipitation, minimum and maximum temperature. The WG is calibrated for 26 Swiss stations and the reference period 1980-2009. It is perturbed with change factors derived from 12 RCMs (ENSEMBLES) to represent the climate of 2070-2099 assuming the SRES A1B emission scenario. The WG can be run in multi-site mode, making it especially attractive for impact modelers who rely on a realistic spatial structure in downscaled time-series. The results from the WG are benchmarked against the original delta-change approach that applies mean additive or multiplicative adjustments to the observations. According to both downscaling methods, the results reveal area-wide mean temperature increases and a precipitation decrease in summer, consistent with earlier studies. For the summer drying, the WG indicates primarily a decrease in wet-day frequency and correspondingly an increase in mean dry spell length by around 18%-40% at low-elevation stations. By construction, these potential changes cannot be represented by a delta-change approach. In winter, both methods project a shortening of the frost period (-30 to -60 days) and a decrease of snow days (-20% to -100%). The WG demonstrates, though, that near-present-day snow-day conditions could still occur in the future. As expected, both methods have difficulties in representing extremes. If users focus on changes in temporal sequences and need a large number of future realizations that are spatially consistent, it is recommended to use data from a WG instead of a delta-change approach.
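For contrast with the WG, the benchmark delta-change approach above reduces to a simple per-month adjustment of the observations: additive for temperature and multiplicative for precipitation. The sketch below illustrates only that mechanical step; the change factors are assumptions, not values derived from the ENSEMBLES RCMs.

```python
import numpy as np

def delta_change(obs, months, delta_t=None, factor_p=None):
    """Apply monthly change factors (index 0 = January) to a daily series."""
    out = obs.astype(float).copy()
    for m in range(1, 13):
        sel = months == m
        if delta_t is not None:        # temperature: additive shift
            out[sel] += delta_t[m - 1]
        if factor_p is not None:       # precipitation: multiplicative scaling
            out[sel] *= factor_p[m - 1]
    return out

lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
months = np.repeat(np.arange(1, 13), lengths)
tmax = 10 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))   # toy observed series
tmax_future = delta_change(tmax, months, delta_t=np.full(12, 2.5))
```

By construction, the adjusted series keeps the observed wet-day frequency and temporal sequencing, which is exactly the limitation the abstract attributes to this method.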
Multiple-Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
We present a multiple-instance regression algorithm that models internal bag structure to identify the items most relevant to the bag labels. Multiple-instance regression (MIR) operates on a set of bags with real-valued labels, each containing a set of unlabeled items, in which the relevance of each item to its bag label is unknown. The goal is to predict the labels of new bags from their contents. Unlike previous MIR methods, MI-ClusterRegress can operate on bags that are structured in that they contain items drawn from a number of distinct (but unknown) distributions. MI-ClusterRegress simultaneously learns a model of the bag's internal structure, the relevance of each item, and a regression model that accurately predicts labels for new bags. We evaluated this approach on the challenging MIR problem of crop yield prediction from remote sensing data. MI-ClusterRegress provided predictions that were more accurate than those obtained with non-multiple-instance approaches or MIR methods that do not model the bag structure.
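A simplified one-pass sketch in the spirit of MI-ClusterRegress: cluster all items, represent each bag by its mean item within each cluster, and keep the cluster whose regression best fits the bag labels. The published algorithm couples these steps rather than running them once; scikit-learn, k=3, and the toy bags are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_mir(bags, labels, k=3):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(bags))
    best = None
    for c in range(k):  # try each cluster as the "relevant" item distribution
        reps = np.array([bag[km.predict(bag) == c].mean(axis=0)
                         if (km.predict(bag) == c).any() else bag.mean(axis=0)
                         for bag in bags])
        model = LinearRegression().fit(reps, labels)
        err = np.mean((model.predict(reps) - labels) ** 2)
        if best is None or err < best[0]:
            best = (err, c, model)
    return km, best[1], best[2]  # clustering, relevant cluster id, regressor

rng = np.random.default_rng(1)
bags = [rng.normal(size=(rng.integers(5, 10), 4)) for _ in range(30)]
labels = np.array([b.mean() for b in bags])  # toy bag-level labels
km, cluster_id, model = fit_mir(bags, labels)
```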
Three-Dimensional Geometry of Collagenous Tissues by Second Harmonic Polarimetry.
Reiser, Karen; Stoller, Patrick; Knoesen, André
2017-06-01
Collagen is a biological macromolecule capable of second harmonic generation, allowing label-free detection in tissues; in addition, molecular orientation can be determined from the polarization dependence of the second harmonic signal. Previously we reported that in-plane orientation of collagen fibrils could be determined by modulating the polarization angle of the laser during scanning. We have now extended this method so that out-of-plane orientation angles can be determined at the same time, allowing visualization of the 3-dimensional structure of collagenous tissues. This approach offers advantages compared with other methods for determining out-of-plane orientation. First, the orientation angles are directly calculated from the polarimetry data obtained in a single scan, while other reported methods require data from multiple scans, use of iterative optimization methods, application of fitting algorithms, or extensive post-optical processing. Second, our method does not require highly specialized instrumentation, and thus can be adapted for use in almost any nonlinear optical microscopy setup. It is suitable for both basic and clinical applications. We present three-dimensional images of structurally complex collagenous tissues that illustrate the power of such 3-dimensional analyses to reveal the architecture of biological structures.
Single-Molecule Studies of Actin Assembly and Disassembly Factors
Smith, Benjamin A.; Gelles, Jeff; Goode, Bruce L.
2014-01-01
The actin cytoskeleton is very dynamic and highly regulated by multiple associated proteins in vivo. Understanding how this system of proteins functions in the processes of actin network assembly and disassembly requires methods to dissect the mechanisms of activity of individual factors and of multiple factors acting in concert. The advent of single-filament and single-molecule fluorescence imaging methods has provided a powerful new approach to discovering actin-regulatory activities and obtaining direct, quantitative insights into the pathways of molecular interactions that regulate actin network architecture and dynamics. Here we describe techniques for acquisition and analysis of single-molecule data, applied to the novel challenges of studying the filament assembly and disassembly activities of actin-associated proteins in vitro. We discuss the advantages of single-molecule analysis in directly visualizing the order of molecular events, measuring the kinetic rates of filament binding and dissociation, and studying the coordination among multiple factors. The methods described here complement traditional biochemical approaches in elucidating actin-regulatory mechanisms in reconstituted filamentous networks.
Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.
Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk
2018-05-01
Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.
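The simplest of the defined-approach families named above is a consensus (majority-vote) integration of binary calls from the individual information sources. A minimal sketch follows; the input names, cutoff, and tie-handling rule are illustrative assumptions, not the STTF's actual data-integration procedures.

```python
import numpy as np

def consensus_call(calls):
    """calls: 0/1 sensitizer predictions from in vitro/in chemico/in silico inputs."""
    calls = np.asarray(calls)
    return int(calls.sum() * 2 >= len(calls))  # ties resolved conservatively as 1

# e.g. hypothetical binary calls for one substance from three inputs:
print(consensus_call([1, 0, 1]))  # -> 1, predicted skin sensitizer
```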
Fitzgerald, Timothy L; Powell, Jonathan J; Stiller, Jiri; Weese, Terri L; Abe, Tomoko; Zhao, Guangyao; Jia, Jizeng; McIntyre, C Lynne; Li, Zhongyi; Manners, John M; Kazan, Kemal
2015-01-01
Reverse genetic techniques harnessing mutational approaches are powerful tools that can provide substantial insight into gene function in plants. However, as compared to diploid species, reverse genetic analyses in polyploid plants such as bread wheat can present substantial challenges associated with high levels of sequence and functional similarity amongst homoeologous loci. We previously developed a high-throughput method to identify deletions of genes within a physically mutagenized wheat population. Here we describe our efforts to combine multiple homoeologous deletions of three candidate disease susceptibility genes (TaWRKY11, TaPFT1 and TaPLDß1). We were able to produce lines featuring homozygous deletions at two of the three homoeoloci for all genes, but this was dependent on the individual mutants used in crossing. Intriguingly, despite extensive efforts, viable lines possessing homozygous deletions at all three homoeoloci could not be produced for any of the candidate genes. To investigate deletion size as a possible reason for this phenomenon, we developed an amplicon sequencing approach based on synteny to Brachypodium distachyon to assess the size of the deletions removing one candidate gene (TaPFT1) in our mutants. These analyses revealed that genomic deletions removing the locus are relatively large, resulting in the loss of multiple additional genes. The implications of this work for the use of heavy ion mutagenesis for reverse genetic analyses in wheat are discussed.
Exhaled Breath Markers for Nonimaging and Noninvasive Measures for Detection of Multiple Sclerosis.
Broza, Yoav Y; Har-Shai, Lior; Jeries, Raneen; Cancilla, John C; Glass-Marmor, Lea; Lejbkowicz, Izabella; Torrecilla, José S; Yao, Xuelin; Feng, Xinliang; Narita, Akimitsu; Müllen, Klaus; Miller, Ariel; Haick, Hossam
2017-11-15
Multiple sclerosis (MS) is the most common chronic neurological disease affecting young adults. MS diagnosis is based on clinical characteristics and confirmed by examination of the cerebrospinal fluid (CSF) or by magnetic resonance imaging (MRI) of the brain or spinal cord or both. However, neither of the current diagnostic procedures is adequate as a routine tool to determine disease state. Thus, diagnostic biomarkers are needed. In the current study, a novel approach that could meet these expectations is presented. The approach is based on noninvasive analysis of volatile organic compounds (VOCs) in breath. Exhaled breath was collected from 204 participants, 146 MS and 58 healthy control individuals. Analysis was performed by gas chromatography-mass spectrometry (GC-MS) and a nanomaterial-based sensor array. Predictive models were derived from the sensors, using artificial neural networks (ANNs). GC-MS analysis revealed significant differences in VOC abundance between MS patients and controls. Sensor data analysis on training sets was able to discriminate in binary comparisons between MS patients and controls with accuracies up to 90%. Blinded sets showed 95% positive predictive value (PPV) between MS-remission and control, 100% sensitivity with 100% negative predictive value (NPV) between MS not-treated (NT) and control, and 86% NPV between relapse and control. Possible links between VOC biomarkers and the MS pathogenesis were established. Preliminary results suggest the applicability of a new nanotechnology-based method for MS diagnostics.
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected by using two-dimensional acquisition geometry are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools and to show whether application of an individual method for further investigation of a specific site is beneficial.
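A hedged sketch of the Shannon information measure described above: the entropy reduction from a (here uniform) prior to the McMC posterior on a parameter such as log-resistivity, estimated from a histogram of the sampled models. The binning and the uniform prior are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def information_gain(posterior_samples, prior_bounds, n_bins=50):
    lo, hi = prior_bounds
    counts, _ = np.histogram(posterior_samples, bins=np.linspace(lo, hi, n_bins + 1))
    p = counts / counts.sum()
    p = p[p > 0]
    h_posterior = -np.sum(p * np.log2(p))   # entropy of the posterior (bits)
    h_prior = np.log2(n_bins)               # uniform prior over the same bins
    return h_prior - h_posterior

samples = np.random.normal(2.0, 0.1, 10_000)  # toy log10-resistivity draws
print(information_gain(samples, prior_bounds=(0.0, 4.0)))
```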
Multilevel resistive information storage and retrieval
Lohn, Andrew; Mickel, Patrick R.
2016-08-09
The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
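As a point of comparison for the optimal control formulation above, the static core of the torque distribution problem has a closed form: minimizing a weighted effort cost sum_i w_i*tau_i^2 subject to sum_i tau_i = tau_d gives tau_i proportional to 1/w_i. The sketch below shows only this allocation; it ignores the actuator dynamics and the singular-perturbation treatment the paper develops, and the weights are assumptions.

```python
import numpy as np

def distribute_torque(tau_d, weights):
    """Lagrange-multiplier solution: tau_i = tau_d * (1/w_i) / sum_j (1/w_j)."""
    inv = 1.0 / np.asarray(weights, dtype=float)
    return tau_d * inv / inv.sum()

# Toy weights: pneumatic effort cheap (0.2), electric effort expensive (1.0).
print(distribute_torque(10.0, weights=[0.2, 1.0]))  # -> [8.33..., 1.66...]
```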
The separate universe approach to soft limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, Zachary; Mulryne, David J., E-mail: z.a.kenton@qmul.ac.uk, E-mail: d.mulryne@qmul.ac.uk
We develop a formalism for calculating soft limits of n-point inflationary correlation functions using separate universe techniques. Our method naturally allows for multiple fields and leads to an elegant diagrammatic approach. As an application we focus on the trispectrum produced by inflation with multiple light fields, giving explicit formulae for all possible single- and double-soft limits. We also investigate consistency relations and present an infinite tower of inequalities between soft correlation functions which generalise the Suyama-Yamaguchi inequality.
Hierarchical screening for multiple mental disorders.
Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J
2013-10-01
There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be achieved using two-stage hierarchical screening rather than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of prescreeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
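A minimal sketch of the gating logic behind two-stage hierarchical screening: a disorder's full scale is administered only when its brief pre-screen is endorsed, and the saving is the count of items never presented. The item counts, cutoffs, and toy responses are assumptions, not the study's instruments.

```python
def hierarchical_screen(responses, batteries, gate_cutoff=1):
    """batteries: disorder -> (gate item indices, full-scale item indices)."""
    administered, positives = 0, []
    for disorder, (gate_items, full_items) in batteries.items():
        administered += len(gate_items)
        if sum(responses[i] for i in gate_items) >= gate_cutoff:  # gate opens
            administered += len(full_items)
            if sum(responses[i] for i in full_items) >= len(full_items) // 2:
                positives.append(disorder)
    return positives, administered

batteries = {"MDD": ([0, 1], [2, 3, 4, 5, 6]), "GAD": ([7], [8, 9, 10, 11])}
responses = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]  # toy 0/1 item endorsements
print(hierarchical_screen(responses, batteries))   # MDD flagged; 8 items shown
```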
Zhe, Shandian; Xu, Zenglin; Qi, Yuan; Yu, Peng
2014-01-01
A key step for Alzheimer's disease (AD) study is to identify associations between genetic variations and intermediate phenotypes (e.g., brain structures). At the same time, it is crucial to develop a noninvasive means for AD diagnosis. Although these two tasks (association discovery and disease diagnosis) have been treated separately by a variety of approaches, they are tightly coupled due to their common biological basis. We hypothesize that the two tasks can potentially benefit each other by a joint analysis, because (i) the association study discovers correlated biomarkers from different data sources, which may help improve diagnosis accuracy, and (ii) the disease status may help identify disease-sensitive associations between genetic variations and MRI features. Based on this hypothesis, we present a new sparse Bayesian approach for joint association study and disease diagnosis. In this approach, common latent features are extracted from different data sources based on sparse projection matrices and used to predict multiple disease severity levels based on Gaussian process ordinal regression; in return, the disease status is used to guide the discovery of relationships between the data sources. The sparse projection matrices not only reveal the associations but also select groups of biomarkers related to AD. To learn the model from data, we develop an efficient variational expectation maximization algorithm. Simulation results demonstrate that our approach achieves higher accuracy in both predicting ordinal labels and discovering associations between data sources than alternative methods. We apply our approach to an imaging genetics dataset of AD. Our joint analysis approach not only identifies meaningful and interesting associations between genetic variations, brain structures, and AD status, but also achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods.
Unbiased Characterization of Anopheles Mosquito Blood Meals by Targeted High-Throughput Sequencing
Logue, Kyle; Keven, John Bosco; Cannon, Matthew V.; Reimer, Lisa; Siba, Peter; Walker, Edward D.; Zimmerman, Peter A.; Serre, David
2016-01-01
Understanding mosquito host choice is important for assessing vector competence or identifying disease reservoirs. Unfortunately, the availability of an unbiased method for comprehensively evaluating the composition of insect blood meals is very limited, as most current molecular assays only test for the presence of a few pre-selected species. These approaches also have limited ability to identify the presence of multiple mammalian hosts in a single blood meal. Here, we describe a novel high-throughput sequencing method that enables analysis of 96 mosquitoes simultaneously and provides a comprehensive and quantitative perspective on the composition of each blood meal. We validated in silico that universal primers targeting the mammalian mitochondrial 16S ribosomal RNA genes (16S rRNA) should amplify more than 95% of the mammalian 16S rRNA sequences present in the NCBI nucleotide database. We applied this method to 442 female Anopheles punctulatus s. l. mosquitoes collected in Papua New Guinea (PNG). While human (52.9%), dog (15.8%) and pig (29.2%) were the most common hosts identified in our study, we also detected DNA from mice, one marsupial species and two bat species. Our analyses also revealed that 16.3% of the mosquitoes fed on more than one host. Analysis of the human mitochondrial hypervariable region I in 102 human blood meals showed that 5 (4.9%) of the mosquitoes unambiguously fed on more than one person. Overall, analysis of PNG mosquitoes illustrates the potential of this approach to identify unsuspected hosts and characterize mixed blood meals, and shows how this approach can be adapted to evaluate inter-individual variations among human blood meals. Furthermore, this approach can be applied to any disease-transmitting arthropod and can be easily customized to investigate non-mammalian host sources.
Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R
2016-11-16
Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.
Imputation method for lifetime exposure assessment in air pollution epidemiologic studies
2013-01-01
Background Environmental epidemiology, when focused on the life course of exposure to a specific pollutant, requires historical exposure estimates that are difficult to obtain for the full time period due to gaps in the historical record, especially in earlier years. We show that these gaps can be filled by applying multiple imputation methods to a formal risk equation that incorporates lifetime exposure. We also address challenges that arise, including choice of imputation method, potential bias in regression coefficients, and uncertainty in age-at-exposure sensitivities. Methods During time periods when parameters needed in the risk equation are missing for an individual, the parameters are filled by an imputation model using group level information or interpolation. A random component is added to match the variance found in the estimates for study subjects not needing imputation. The process is repeated to obtain multiple data sets, whose regressions against health data can be combined statistically to develop confidence limits using Rubin's rules to account for the uncertainty introduced by the imputations. To test for possible recall bias between cases and controls, which can occur when historical residence location is obtained by interview, and which can lead to misclassification of imputed exposure by disease status, we introduce an "incompleteness index," equal to the percentage of dose imputed (PDI) for a subject. "Effective doses" can be computed using different functional dependencies of relative risk on age of exposure, allowing intercomparison of different risk models. To illustrate our approach, we quantify lifetime exposure (dose) from traffic air pollution in an established case-control study on Long Island, New York, where considerable in-migration occurred over a period of many decades. Results The major result is the described approach to imputation. The illustrative example revealed potential recall bias, suggesting that regressions against health data should be done as a function of PDI to check for consistency of results. The 1% of study subjects who lived for long durations near heavily trafficked intersections had very high cumulative exposures. Thus, imputation methods must be designed to reproduce non-standard distributions. Conclusions Our approach meets a number of methodological challenges to extending historical exposure reconstruction over a lifetime and shows promise for environmental epidemiology. Application to assessment of breast cancer risks will be reported in a subsequent manuscript.
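The pooling step referenced above (Rubin's rules) is compact enough to sketch directly: the pooled coefficient is the mean across the m imputed datasets, and its variance adds the within-imputation variance to an inflated between-imputation term. The toy numbers below are assumptions, not study values.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """estimates: per-imputation coefficients; variances: their squared SEs."""
    q, u = np.asarray(estimates, float), np.asarray(variances, float)
    m = len(q)
    q_bar = q.mean()                    # pooled point estimate
    w = u.mean()                        # within-imputation variance
    b = q.var(ddof=1)                   # between-imputation variance
    total = w + (1.0 + 1.0 / m) * b     # total variance
    return q_bar, np.sqrt(total)        # pooled estimate and its SE

est = [0.42, 0.45, 0.39, 0.44, 0.41]    # toy coefficients from m = 5 imputations
var = [0.010, 0.012, 0.011, 0.009, 0.010]
print(rubins_rules(est, var))
```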
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
Sociotechnical Analysis of Health Information Exchange Consent Processes in an HIV Clinic.
Ramos, S Raquel; Gordon, Peter; Bakken, Suzanne; Schnall, Rebecca
Federal regulations have encouraged the electronic sharing of protected health information (PHI). As an opt-in state, New York abides by an affirmative consent model where PHI is electronically shared only after written consent is obtained. The purpose of our study was to describe sociotechnical factors that influence health information exchange (HIE) consent for persons living with HIV (PLWH) at one clinic in New York City. We employed mixed methods to gather perceptions of facilitators and barriers to HIE consent. Study participants included PLWH, staff, and clinicians. The mixed-methods approach revealed multiple interruptions in clinical workflow, staff and providers' time constraints, and lack of dedicated personnel focused on HIE consent as the major barriers to HIE consent. Although there is no one strategy to resolve barriers to HIE consent, having a dedicated person was identified as the most salient factor for facilitating HIE consent. Copyright © 2016 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lien, Chi-Hsiang; Lin, Chun-Yu; Chen, Shean-Jen; Chien, Fan-Ching
2017-02-01
A three-dimensional (3D) single fluorescent particle tracking strategy based on temporal focusing multiphoton excitation microscopy (TFMPEM) combined with astigmatism imaging is proposed for delivering nanoscale-level axial information that reveals 3D trajectories of single fluorospheres in the axially-resolved multiphoton excitation volume without z-axis scanning. Its dynamic capability is demonstrated by measuring the diffusion coefficient of fluorospheres in glycerol solutions, with position standard deviations of 14 nm and 21 nm in the lateral and axial directions, respectively, and a frame rate of 100 Hz. Moreover, the optical trapping force in TFMPEM is minimized, avoiding the interference with the tracking measurements that arises in spatial-focusing MPE approaches. In summary, we present a three-dimensional single-particle tracking strategy that overcomes the time-resolution limitation of multiphoton imaging by exploiting the fast frame rate of TFMPEM and provides three-dimensional locations of multiple particles via an astigmatism method.
Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences
2014-01-01
Background The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structures between datasets and their incomplete removal by pre-processing algorithms. Methods To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, exploiting different pre-processing techniques in an ensemble technique improved classification for a majority of signatures. Conclusions Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers.
Systematic analysis of molecular mechanisms for HCC metastasis via text mining approach.
Zhen, Cheng; Zhu, Caizhong; Chen, Haoyang; Xiong, Yiru; Tan, Junyuan; Chen, Dong; Li, Jin
2017-02-21
Our aim was to systematically explore the molecular mechanisms of hepatocellular carcinoma (HCC) metastasis and identify regulatory genes with text mining methods. Genes with the highest frequencies and significant pathways related to HCC metastasis were listed. A handful of proteins, such as EGFR, MDM2, TP53 and APP, were identified as hub nodes in the PPI (protein-protein interaction) network. Compared with genes unique to HBV-HCCs, genes particular to HCV-HCCs were fewer, but may participate in more extensive signaling processes. VEGFA, PI3KCA, MAPK1, MMP9 and other genes may play important roles in multiple phenotypes of metastasis. Genes in the abstracts of the HCC-metastasis literature were identified. Word frequency analysis, KEGG pathway analysis and PPI network analysis were performed. Then, co-occurrence analysis between genes and metastasis-related phenotypes was carried out. Text mining is effective for revealing potential regulators or pathways, but its purpose should be specific, and the combination of various methods will be more useful.
Ron, Gil; Globerson, Yuval; Moran, Dror; Kaplan, Tommy
2017-12-21
Proximity-ligation methods such as Hi-C allow us to map physical DNA-DNA interactions along the genome, and reveal its organization into topologically associating domains (TADs). As the Hi-C data accumulate, computational methods have been developed for identifying domain borders in multiple cell types and organisms. Here, we present PSYCHIC, a computational approach for analyzing Hi-C data and identifying promoter-enhancer interactions. We use a unified probabilistic model to segment the genome into domains, which we then merge hierarchically and fit using a local background model, allowing us to identify over-represented DNA-DNA interactions across the genome. By analyzing the published Hi-C data sets in human and mouse, we identify hundreds of thousands of putative enhancers and their target genes, and compile an extensive genome-wide catalog of gene regulation in human and mouse. As we show, our predictions are highly enriched for ChIP-seq and DNA accessibility data, evolutionary conservation, eQTLs and other DNA-DNA interaction data.
A data fusion approach for track monitoring from multiple in-service trains
NASA Astrophysics Data System (ADS)
Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo
2017-10-01
We present a data fusion approach for enabling data-driven rail-infrastructure monitoring from multiple in-service trains. A number of researchers have proposed using vibration data collected from in-service trains as a low-cost method to monitor track geometry. The majority of this work has focused on developing novel features to extract information about the tracks from data produced by individual sensors on individual trains. We extend this work by presenting a technique to combine extracted features from multiple passes over the tracks from multiple sensors aboard multiple vehicles. There are a number of challenges in combining multiple data sources, like different relative position coordinates depending on the location of the sensor within the train. Furthermore, as the number of sensors increases, the likelihood that some will malfunction also increases. We use a two-step approach that first minimizes position offset errors through data alignment, then fuses the data with a novel adaptive Kalman filter that weights data according to its estimated reliability. We show the efficacy of this approach both through simulations and on a data-set collected from two instrumented trains operating over a one-year period. Combining data from numerous in-service trains allows for more continuous and more reliable data-driven monitoring than analyzing data from any one train alone; as the number of instrumented trains increases, the proposed fusion approach could facilitate track monitoring of entire rail-networks.
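A toy scalar sketch of the second step described above: a Kalman-style update whose measurement noise is set per source from that source's recent residual variance, so unreliable sensors or vehicles are automatically down-weighted. The real system operates on aligned track-position features; this one-dimensional version and the reliability estimate are assumptions.

```python
import numpy as np

def fuse_passes(passes, process_var=1e-4):
    """passes: sequence of (source_id, measurement) over repeated track visits."""
    x, p = 0.0, 1.0                        # track-feature estimate and variance
    residuals = {}                         # innovation history per source
    for src, z in passes:
        hist = residuals.setdefault(src, [])
        r = np.var(hist) if len(hist) >= 3 else 1.0   # estimated source noise
        p += process_var                   # predict: the track changes slowly
        k = p / (p + r)                    # gain shrinks for noisy sources
        hist.append(z - x)
        x += k * (z - x)                   # update
        p *= 1.0 - k
    return x, p

rng = np.random.default_rng(2)
passes = [("trainA", 1.0 + 0.05 * rng.standard_normal()) for _ in range(20)]
passes += [("trainB", 1.0 + 0.50 * rng.standard_normal()) for _ in range(20)]
print(fuse_passes(passes))                 # the noisier train contributes less
```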
ERIC Educational Resources Information Center
Gaffey, Abigail R.; Rottinghaus, Patrick J.
2009-01-01
Work-family conflict (WFC) has been examined from a unidimensional approach, yet recent research has revealed three types (i.e., time, strain, and behavior) and two directions of work-family conflict. Previous researchers suggested that college students are unable to discern between the multiple-facets of WFC, thus measured anticipated WFC…
Understanding Communities of Neglectful Parents: Child Caregiving Networks and Child Neglect
ERIC Educational Resources Information Center
Roditti, Martha G.
2005-01-01
This article focuses on family social networks and the community of caregivers of neglected children. If neglect is part of family functioning, who watches over the children? Using a case study approach, this study researched 12 children and their parents. Several concepts, such as multiple caregiving and kin keepers, revealed that study children…
Learning and Growing: Trust, Leadership, and Response to Crisis
ERIC Educational Resources Information Center
Sutherland, Ian E.
2017-01-01
Purpose: The purpose of this paper is to explore the nature of trust in a school community related to the leadership response to crisis. Design/Methodology/Approach: This study was a multiple-source qualitative study of a single case of a PreK-12 international school called The Learning School. Findings: The findings revealed the nature of how…
Schiebel, Johannes; Radeva, Nedyalka; Köster, Helene; Metz, Alexander; Krotzky, Timo; Kuhnert, Maren; Diederich, Wibke E; Heine, Andreas; Neumann, Lars; Atmanene, Cedric; Roecklin, Dominique; Vivat-Hannah, Valérie; Renaud, Jean-Paul; Meinecke, Robert; Schlinck, Nina; Sitte, Astrid; Popp, Franziska; Zeeb, Markus; Klebe, Gerhard
2015-09-01
Fragment-based lead discovery is gaining momentum in drug development. Typically, a hierarchical cascade of several screening techniques is consulted to identify fragment hits which are then analyzed by crystallography. Because crystal structures with bound fragments are essential for the subsequent hit-to-lead-to-drug optimization, the screening process should distinguish reliably between binders and non-binders. We therefore investigated whether different screening methods would reveal similar collections of putative binders. First we used a biochemical assay to identify fragments that bind to endothiapepsin, a surrogate for disease-relevant aspartic proteases. In a comprehensive screening approach, we then evaluated our 361-entry library by using a reporter-displacement assay, saturation-transfer difference NMR, native mass spectrometry, thermophoresis, and a thermal shift assay. While the combined results of these screening methods retrieve 10 of the 11 crystal structures originally predicted by the biochemical assay, the mutual overlap of individual hit lists is surprisingly low, highlighting that each technique operates on different biophysical principles and conditions. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Du, Pufeng; Wang, Lusheng
2014-01-01
One of the fundamental tasks in biology is to identify the functions of all proteins to reveal the primary machinery of a cell. Knowledge of the subcellular locations of proteins provides key hints to reveal their functions and to understand the intricate pathways that regulate biological processes at the cellular level. Protein subcellular location prediction has been extensively studied over the past two decades, and many methods have been developed based on protein primary sequences as well as protein-protein interaction networks. In this paper, we propose to use the protein-protein interaction network as an infrastructure for integrating existing sequence-based predictors. When predicting the subcellular locations of a given protein, we consider not only the protein itself but also all of its interacting partners. Unlike existing methods, our method requires neither comprehensive knowledge of the protein-protein interaction network nor experimentally annotated subcellular locations for most proteins in the network. Moreover, our method can be used as a framework to integrate multiple predictors. It achieved an absolute-true rate of 56% on the human proteome, which is higher than the state-of-the-art methods. PMID:24466278
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
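To make the caution concrete, here is a short sketch contrasting dummy coding with arbitrary numeric ("nonsense") coding of a categorical predictor; the data frame and codes are invented for illustration.

```python
import pandas as pd

df = pd.DataFrame({"site": ["A", "B", "C", "D", "A", "C"],
                   "y":    [1.0, 2.1, 0.7, 3.2, 1.1, 0.9]})

# Dummy coding: k-1 indicator columns against a reference level.
# drop_first=True avoids the collinearity that a full set of k
# indicators would introduce alongside an intercept.
X_dummy = pd.get_dummies(df["site"], prefix="site", drop_first=True)

# "Nonsense" coding: arbitrary numeric codes impose a fake ordering and
# spacing on the categories, so a regression on this single column tests
# a hypothesis the researcher probably never intended.
X_nonsense = df["site"].map({"A": 1, "B": 2, "C": 3, "D": 4})
```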
Clinical Practice Improvement Approach in Multiple Sclerosis Rehabilitation: A Pilot Study
ERIC Educational Resources Information Center
Khan, Fary
2010-01-01
The objective of this study was to explore methods examining patient complexity and therapy interventions in relation to functional outcomes from an inpatient multiple sclerosis (MS) rehabilitation program. Retrospective and prospective data for 24 consecutive inpatients at a tertiary rehabilitation facility assessed (i)…
Detection of bifurcations in noisy coupled systems from multiple time series
NASA Astrophysics Data System (ADS)
Williamson, Mark S.; Lenton, Timothy M.
2015-03-01
We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.
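The decay-rate and oscillation-frequency indicators can be estimated from data by fitting a lag-1 linear propagator to the multivariate time series; the eigenvalue magnitudes give per-mode decay rates and the eigenvalue arguments give frequencies. This is a generic illustration, not the authors' exact estimator.

```python
import numpy as np

def lag1_indicators(X, dt=1.0):
    """Estimate the lag-1 propagator of a multivariate time series
    (rows = time, cols = variables) and return per-mode decay rates
    and oscillation frequencies."""
    X0, X1 = X[:-1], X[1:]
    # least-squares fit of x_{t+1} ~= A x_t (B solves X1 ~= X0 @ B, A = B.T)
    B, *_ = np.linalg.lstsq(X0, X1, rcond=None)
    eig = np.linalg.eigvals(B.T)
    decay_rates = -np.log(np.abs(eig)) / dt        # approach 0 near a bifurcation
    frequencies = np.angle(eig) / (2 * np.pi * dt) # drop before a homoclinic bifurcation
    return decay_rates, frequencies
```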
Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel
2018-05-01
Quantification of selenated amino-acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino-acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of selenium/sulfur substitution rate in Met. Moreover the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.
ERIC Educational Resources Information Center
ALTMANN, BERTHOLD; BROWN, WILLIAM G.
The first-generation Approach by Concept (ABC) storage and retrieval method, which utilizes as a subject approach appropriate standardized English-language statements processed and printed in a permuted index format, underwent a performance test, the primary objective of which was to spot deficiencies and to develop a second-generation…
Kobayashi, Keigo; Naoki, Katsuhiko; Kuroda, Aoi; Yasuda, Hiroyuki; Kawada, Ichiro; Soejima, Kenzo; Betsuyaku, Tomoko
2018-04-01
A 69-year-old man with post-operative recurrence of lung adenocarcinoma was treated with multiple chemotherapies, including epidermal growth factor receptor (EGFR)-tyrosine kinase inhibitors. A second biopsy revealed an EGFR T790M mutation. As 10th-line chemotherapy, osimertinib was initiated. After 24 weeks, chest computed tomography (CT) revealed asymptomatic ground-glass opacities in both lobes. After four weeks of osimertinib discontinuation, imaging revealed rapid lung cancer progression. Osimertinib was resumed. After 11 weeks, CT revealed decreased lung nodules with no exacerbation of interstitial lung disease. We describe a patient who experienced transient asymptomatic pulmonary opacities during treatment with osimertinib, which was successfully managed by a "stop-and-go" approach.
Alcohol-abuse drug disulfiram targets cancer via p97 segregase adaptor NPL4.
Skrott, Zdenek; Mistrik, Martin; Andersen, Klaus Kaae; Friis, Søren; Majera, Dusana; Gursky, Jan; Ozdian, Tomas; Bartkova, Jirina; Turi, Zsofia; Moudry, Pavel; Kraus, Marianne; Michalova, Martina; Vaclavkova, Jana; Dzubak, Petr; Vrobel, Ivo; Pouckova, Pavla; Sedlacek, Jindrich; Miklovicova, Andrea; Kutt, Anne; Li, Jing; Mattova, Jana; Driessen, Christoph; Dou, Q Ping; Olsen, Jørgen; Hajduch, Marian; Cvek, Boris; Deshaies, Raymond J; Bartek, Jiri
2017-12-14
Cancer incidence is rising and this global challenge is further exacerbated by tumour resistance to available medicines. A promising approach to meet the need for improved cancer treatment is drug repurposing. Here we highlight the potential for repurposing disulfiram (also known by the trade name Antabuse), an old alcohol-aversion drug that has been shown to be effective against diverse cancer types in preclinical studies. Our nationwide epidemiological study reveals that patients who continuously used disulfiram have a lower risk of death from cancer compared to those who stopped using the drug at their diagnosis. Moreover, we identify the ditiocarb-copper complex as the metabolite of disulfiram that is responsible for its anti-cancer effects, and provide methods to detect preferential accumulation of the complex in tumours and candidate biomarkers to analyse its effect on cells and tissues. Finally, our functional and biophysical analyses reveal the molecular target of disulfiram's tumour-suppressing effects as NPL4, an adaptor of p97 (also known as VCP) segregase, which is essential for the turnover of proteins involved in multiple regulatory and stress-response pathways in cells.
Impact of environmental inputs on reverse-engineering approach to network structures.
Wu, Jianhua; Sinfield, James L; Buchanan-Wollaston, Vicky; Feng, Jianfeng
2009-12-04
Uncovering complex network structures from a biological system is one of the main topics in systems biology. Network structures can be inferred by dynamical Bayesian networks or Granger causality, but neither technique seriously takes into account the impact of environmental inputs. Considering the natural rhythmic dynamics of biological data, we propose a systems biology approach to reveal the impact of environmental inputs on network structures. We first represent the environmental inputs by a harmonic oscillator and combine them with Granger causality to identify environmental inputs and then uncover the causal network structures. We also generalize this to multiple harmonic oscillators to represent various exogenous influences. The approach is extensively tested with toy models and successfully applied to a real biological network of microarray data of the flowering genes of the model plant Arabidopsis thaliana. The aim is to identify those genes that are directly affected by the presence of sunlight and to uncover the interactive network structures associated with flowering metabolism. We demonstrate that environmental inputs are crucial for correctly inferring network structures. The harmonic causal method proves to be a powerful technique for detecting environmental inputs and uncovering network structures, especially when the biological data exhibit periodic oscillations.
NASA Astrophysics Data System (ADS)
Chen, Xinying
2014-12-01
Researchers have been discussing the language system theoretically for many years [1]. A well-accepted assumption is that language is a complex adaptive system [2] which is hierarchical [3] and contains multiple levels along the meaning-form dimension [4]. Over the last decade or so, driven by the availability of digital language data and the popularity of statistical approaches, many researchers interested in theoretical questions have begun to quantitatively describe microscopic linguistic features at a given level of the language system using authentic language data. Despite the fruitful findings, one question remains unclear: what does a whole language system look like? To answer this question, the network approach, an analysis method that emphasizes the macro features of structures, has been introduced into linguistic studies [5]. By analyzing static and dynamic linguistic networks constructed from authentic language data, many macro and micro linguistic features, such as lexical, syntactic or semantic features, have been discovered and successfully applied in linguistic typological studies, revealing the huge potential of linguistic network research [6].
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion-liveness, especially in cluttered and dynamic environments. Our method constructs a mathematical formulation in the form of an optimization problem, minimizing the overall travel times of the robots subject to resolving all conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems, which can be solved using parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations: our method resolved the conflicts of 100 robots in less than 1.23 s in a cluttered environment with 4357 intersections in the robots' paths. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. Finally, we compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that while our approach is mathematically sound, it is more computationally efficient, scales to very large numbers of robots, and guarantees live and smooth robot motion.
EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.
Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin
2018-04-24
Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes, and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
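The exact EWCos weighting scheme is defined in the paper; the sketch below only conveys the general flavor, a gene-weighted cosine similarity between a disease signature and a drug profile, with the weights treated as an assumed input.

```python
import numpy as np

def weighted_cosine(query, drug, weights):
    """Weighted cosine similarity between a disease signature (query)
    and a drug-induced expression profile. `weights` down-weight genes
    whose expression changes are uninformative (e.g. small or noisy).
    All three arrays are indexed by the same genes."""
    w = np.asarray(weights, dtype=float)
    num = np.sum(w * query * drug)
    den = np.sqrt(np.sum(w * query**2) * np.sum(w * drug**2))
    return num / den

# reversal candidates score strongly negative against the disease signature
```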
Class Council between Democracy Learning and Character Education
ERIC Educational Resources Information Center
Budde, Jürgen; Weuster, Nora
2017-01-01
Purpose: Class council has become a popular approach for character education and democracy learning in German schools. However, it is not clear if the expectations are met in social practice. Approach: The data was gained with an ethnographical multiple method approach within three contrasting secondary schools. The study is informed by practice…
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of the assays. However, such data often suffer from left-censoring because of the assays' lower limits of detection. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in the closely related setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
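For reference, the complete-data CCC itself is straightforward to compute from its standard definition. In the authors' two-step approach the MICE step (not shown here) would first fill in left-censored values, with the CCC computed on each completed dataset and pooled across imputations; the sketch covers only the complete-data calculation.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two assays:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```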
Qendro, Veneta; Bugos, Grace A; Lundgren, Debbie H; Glynn, John; Han, May H; Han, David K
2017-03-01
In order to gain mechanistic insights into multiple sclerosis (MS) pathogenesis, we utilized a multi-dimensional approach to test the hypothesis that mutations in myelin proteins lead to immune activation and central nervous system autoimmunity in MS. Mass spectrometry-based proteomic analysis of human MS brain lesions revealed seven unique mutations of PLP1, a key myelin protein that is known to be destroyed in MS. Surprisingly, in-depth analysis of two MS patients at the genomic DNA and mRNA levels confirmed the mutated PLP1 in RNA, but not in genomic DNA. Quantification of wild-type and mutant PLP RNA levels by qPCR further validated the presence of mutant PLP RNA in the MS patients. To seek evidence linking mutations in abundant myelin proteins to immune-mediated destruction of myelin, we examined the specific immune response against mutant PLP1 in MS patients. We designed paired wild-type and mutant peptide microarrays and examined antibody responses to multiple mutated PLP1 peptides in sera from MS patients. Consistent with the idea that different patients exhibit unique mutation profiles, we found that 13 out of 20 MS patients showed antibody responses against specific, but not all, mutant PLP1 peptides. Interestingly, we also found mutant PLP-directed antibody responses against specific mutant peptides in the sera of pre-MS controls. The results from these integrative proteomic, genomic, and immune analyses reveal a possible mechanism of mutation-driven pathogenesis in human MS. The study also highlights the need for integrative genomic and proteomic analyses to uncover the pathogenic mechanisms of human diseases. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analysis of Genome-Wide Association Studies with Multiple Outcomes Using Penalization
Liu, Jin; Huang, Jian; Ma, Shuangge
2012-01-01
Genome-wide association studies have been extensively conducted, searching for markers of biologically meaningful outcomes and phenotypes. Penalization methods have been adopted in the analysis of the joint effects of a large number of SNPs (single nucleotide polymorphisms) and in marker identification. This study is partly motivated by the analysis of a heterogeneous stock mice dataset, in which multiple correlated phenotypes and a large number of SNPs are available. Existing penalization methods designed to analyze a single response variable cannot accommodate the correlation among multiple response variables. With multiple response variables sharing the same set of markers, joint modeling is first employed to accommodate the correlation. The group Lasso approach is adopted to select markers associated with all the outcome variables, and an efficient computational algorithm is developed. A simulation study and the analysis of the heterogeneous stock mice dataset show that the proposed method can outperform existing penalization methods. PMID:23272092
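A close off-the-shelf analogue of joint modeling with a group penalty is scikit-learn's MultiTaskLasso, whose L2,1 penalty zeroes or keeps each SNP's coefficients jointly across all outcomes. This is an illustration of the group-selection idea on simulated data, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# simulated data: 200 samples, 500 SNPs (coded 0/1/2), 3 correlated phenotypes
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
B = np.zeros((500, 3))
B[:5] = rng.normal(size=(5, 3))                     # 5 true markers
Y = X @ B + rng.normal(scale=0.5, size=(200, 3))

# The L2,1 penalty treats each SNP's row of coefficients as a group,
# so a SNP is selected for all outcomes jointly or not at all.
model = MultiTaskLasso(alpha=1.0).fit(X, Y)
selected = np.flatnonzero(np.linalg.norm(model.coef_.T, axis=1))
```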
ERIC Educational Resources Information Center
Busch, Vincent; de Leeuw, Johannes Rob Josephus; de Harder, Alinda; Schrijvers, Augustinus Jacobus Petrus
2013-01-01
Background: In approaches to health promotion in adolescents, unhealthy behaviors are no longer regarded as independent processes, but as interrelated. This article presents a systematic literature review of school-based interventions targeting multiple adolescent behaviors simultaneously. Methods: A systematic literature search was performed…
Jalava, Katri; Rintala, Hanna; Ollgren, Jukka; Maunula, Leena; Gomez-Alvarez, Vicente; Revez, Joana; Palander, Marja; Antikainen, Jenni; Kauppinen, Ari; Räsänen, Pia; Siponen, Sallamaari; Nyholm, Outi; Kyyhkynen, Aino; Hakkarainen, Sirpa; Merentie, Juhani; Pärnänen, Martti; Loginov, Raisa; Ryu, Hodon; Kuusi, Markku; Siitonen, Anja; Miettinen, Ilkka; Santo Domingo, Jorge W.; Hänninen, Marja-Liisa; Pitkänen, Tarja
2014-01-01
Failures in the drinking water distribution system cause gastrointestinal outbreaks with multiple pathogens. A water distribution pipe breakage caused a community-wide waterborne outbreak in Vuorela, Finland, in July 2012. We investigated this outbreak with advanced epidemiological and microbiological methods. A total of 473 of 2931 inhabitants (16%) responded to a web-based questionnaire. Water and patient samples were subjected to analysis of multiple microbial targets, molecular typing and microbial community analysis. Spatial analysis of the water distribution network was performed, and we applied a spatial logistic regression model. The course of the illness was mild. Drinking untreated tap water from the defined outbreak area was significantly associated with illness (RR 5.6, 95% CI 1.9–16.4), increasing in a dose-response manner. The closer a person lived to the water distribution breakage point, the higher the risk of becoming ill. Sapovirus, enterovirus, single Campylobacter jejuni and EHEC O157:H7 findings, as well as virulence genes for the EPEC, EAEC and EHEC pathogroups, were detected by molecular or culture methods in the faecal samples of the patients. EPEC, EAEC and EHEC virulence genes and faecal indicator bacteria were also detected in water samples. Microbial community sequencing of contaminated tap water revealed an abundance of Arcobacter species. The polyphasic approach improved the understanding of the source of the infections and helped to define the extent and magnitude of this outbreak. PMID:25147923
NASA Astrophysics Data System (ADS)
Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in forest management planning. It includes the evaluation of multiple criteria, such as the production of timber and non-timber forest products and the tangible as well as intangible values of ecosystem services (ES), and is thus advantageous compared with methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow MCDA to be aggregated with, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences of certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world faces increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.
Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos
2018-04-15
Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine learning-based risk maps using ensemble modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree and support vector regression (SVR), applied to multiple aquifer systems (unconfined, semi-confined and confined) in the Marand plain, northwest Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized with no-risk, low and moderate vulnerability thresholds. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). Seven DRASTIC parameters served as the model inputs, and GCR values served as the outputs of the individual machine learning models, which were fed into the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. The r and d metrics for the committee-based ANN multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (ensemble) machine learning models over the original DRASTIC approach. The newly designed ensemble-based multi-model approach can be considered a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems, yielding the high accuracy of the committee-based ANN model. Copyright © 2017 Elsevier B.V. All rights reserved.
Narr, Anja; Nawaz, Ali; Wick, Lukas Y.; Harms, Hauke; Chatzinotas, Antonis
2017-01-01
Environmental surveys on soil viruses are still rare and mostly anecdotal, i.e., they mostly report on viruses at one location or for only a few sampling dates. Detailed time-series analysis with multiple samples can reveal the spatio-temporal dynamics of viral communities and provide important input as to how viruses interact with their potential hosts and the environment. Such surveys, however, require fast, easy-to-apply, and reliable methods. In the present study we surveyed, monthly across 13 months, the abundance of virus-like particles (VLP) and the structure of the viral communities in soils along a land-use transect (i.e., forest, pasture, and cropland). We evaluated 32 procedures for extracting VLP from soil using different buffers and mechanical methods. The most efficient extraction was achieved with 1× saline magnesium buffer combined with 20 min of vortexing. For community structure analysis we developed an optimized fingerprinting approach (fluorescent RAPD-PCR; fRAPD) by combining RAPD-PCR with fluorescently labeled primers in order to size the obtained fragments on a capillary sequencing machine. With the concomitantly collected soil-specific factors and weather data, we were able to find correlations of viral abundance and community structure with environmental variables and sampling site. More specifically, we found that soil-specific factors such as pH and total nitrogen content played a significant role in shaping both soil viral abundance and community structure. The fRAPD analysis revealed strong temporal changes and clustered the viral communities according to sampling site. In particular, we observed that temperature and rainfall shaped soil viral communities in non-forest sites. In summary, our findings suggest that sampling site was a key factor shaping the abundance and community structure of soil viruses; where site vegetation was reduced, temperature and rainfall were also important factors. PMID:29067022
NASA Astrophysics Data System (ADS)
Drwal, Malgorzata N.; Agama, Keli; Pommier, Yves; Griffith, Renate
2013-12-01
Purely structure-based pharmacophores (SBPs) are an alternative method to ligand-based approaches and have the advantage of describing the entire interaction capability of a binding pocket. Here, we present the development of SBPs for topoisomerase I, an anticancer target with an unusual ligand binding pocket consisting of protein and DNA atoms. Different approaches to cluster and select pharmacophore features are investigated, including hierarchical clustering and energy calculations. In addition, the performance of SBPs is evaluated retrospectively and compared to the performance of ligand- and complex-based pharmacophores. SBPs emerge as a valid method in virtual screening and a complementary approach to ligand-focussed methods. The study further reveals that the choice of pharmacophore feature clustering and selection methods has a large impact on the virtual screening hit lists. A prospective application of the SBPs in virtual screening reveals that they can be used successfully to identify novel topoisomerase inhibitors.
MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL
Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi
2017-01-01
A phylogenetic tree is a visual diagram of the relationship between a set of biological species. The scientists usually use it to analyze many characteristics of the species. The distance-matrix methods, such as Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods have the computational performance issue. Although several new methods with high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from extremely large set of sequences. The experimental results present that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphic cards achieves approximately 3-fold to 7-fold speedup over the implementation of Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively. PMID:29051701
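For orientation, the single-CPU UPGMA baseline that the GPU version is compared against is available in SciPy as average linkage; the toy distance matrix below is invented.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

# pairwise genetic distances between 4 taxa (symmetric, zero diagonal)
D = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 4],
              [6, 6, 4, 0]], dtype=float)

# method="average" is UPGMA; linkage expects the condensed distance form
tree = linkage(squareform(D), method="average")
```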
Timescale analysis of rule-based biochemical reaction networks
Klinke, David J.; Finley, Stacey D.
2012-01-01
The flow of information within a cell is governed by a series of protein-protein interactions that can be described as a reaction network. Mathematical models of biochemical reaction networks can be constructed by repetitively applying specific rules that define how reactants interact and what new species are formed upon reaction. To aid in understanding the underlying biochemistry, timescale analysis is one method developed to prune the size of the reaction network. In this work, we extend the methods associated with timescale analysis to reaction rules instead of the species contained within the network. To illustrate this approach, we applied timescale analysis to a simple receptor-ligand binding model and a rule-based model of Interleukin-12 (IL-12) signaling in naïve CD4+ T cells. The IL-12 signaling pathway includes multiple protein-protein interactions that collectively transmit information; however, the level of mechanistic detail sufficient to capture the observed dynamics has not been justified based upon the available data. The analysis correctly predicted that reactions associated with JAK2 and TYK2 binding to their corresponding receptor exist at a pseudo-equilibrium. In contrast, reactions associated with ligand binding and receptor turnover regulate cellular response to IL-12. An empirical Bayesian approach was used to estimate the uncertainty in the timescales. This approach complements existing rank- and flux-based methods that can be used to interrogate complex reaction networks. Ultimately, timescale analysis of rule-based models is a computational tool that can be used to reveal the biochemical steps that regulate signaling dynamics. PMID:21954150
Jiménez, Cristina; Jara-Acevedo, María; Corchete, Luis A; Castillo, David; Ordóñez, Gonzalo R; Sarasquete, María E; Puig, Noemí; Martínez-López, Joaquín; Prieto-Conde, María I; García-Álvarez, María; Chillón, María C; Balanzategui, Ana; Alcoceba, Miguel; Oriol, Albert; Rosiñol, Laura; Palomera, Luis; Teruel, Ana I; Lahuerta, Juan J; Bladé, Joan; Mateos, María V; Orfão, Alberto; San Miguel, Jesús F; González, Marcos; Gutiérrez, Norma C; García-Sanz, Ramón
2017-01-01
Identification and characterization of genetic alterations are essential for diagnosis of multiple myeloma and may guide therapeutic decisions. Currently, genomic analysis of myeloma to cover the diverse range of alterations with prognostic impact requires fluorescence in situ hybridization (FISH), single nucleotide polymorphism arrays, and sequencing techniques, which are costly and labor intensive and require large numbers of plasma cells. To overcome these limitations, we designed a targeted-capture next-generation sequencing approach for one-step identification of IGH translocations, V(D)J clonal rearrangements, the IgH isotype, and somatic mutations to rapidly identify risk groups and specific targetable molecular lesions. Forty-eight newly diagnosed myeloma patients were tested with the panel, which included IGH and six genes that are recurrently mutated in myeloma: NRAS, KRAS, HRAS, TP53, MYC, and BRAF. We identified 14 of 17 IGH translocations previously detected by FISH and three confirmed translocations not detected by FISH, with the additional advantage of breakpoint identification, which can be used as a target for evaluating minimal residual disease. IgH subclass and V(D)J rearrangements were identified in 77% and 65% of patients, respectively. Mutation analysis revealed the presence of missense protein-coding alterations in at least one of the evaluating genes in 16 of 48 patients (33%). This method may represent a time- and cost-effective diagnostic method for the molecular characterization of multiple myeloma. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Clustering multilayer omics data using MuNCut.
Teran Hidalgo, Sebastian J; Ma, Shuangge
2018-03-14
Omics profiling is now a routine component of biomedical studies. In the analysis of omics data, clustering is an essential step that serves multiple purposes, including revealing the unknown functionalities of omics units and assisting dimension reduction in outcome model building. In the most recent omics studies, a prominent trend is to conduct multilayer profiling, which collects multiple types of genetic, genomic, epigenetic and other measurements on the same subjects. In the literature, clustering methods tailored to multilayer omics data are still limited. Directly applying existing clustering methods to multilayer omics data, or clustering each layer first and then combining across layers, are both "suboptimal" in that they do not accommodate the interconnections within and across layers in an informative way. In this study, we develop the MuNCut (Multilayer NCut) clustering approach. It is tailored to multilayer omics data and sufficiently accounts for both across- and within-layer connections. It is based on the novel NCut technique and also takes advantage of regularized sparse estimation. It has an intuitive formulation and is computationally very feasible. To facilitate implementation, we develop the function muncut in the R package NcutYX. Under a wide spectrum of simulation settings, it outperforms competitors. The analysis of TCGA (The Cancer Genome Atlas) data on breast cancer and cervical cancer shows that MuNCut generates biologically meaningful results that differ from those obtained using the alternatives. We propose a more effective clustering analysis of multiple omics data, providing a new venue for jointly analyzing genetic, genomic, epigenetic and other measurements.
Constraints on signaling network logic reveal functional subgraphs on Multiple Myeloma OMIC data.
Miannay, Bertrand; Minvielle, Stéphane; Magrangeas, Florence; Guziolowski, Carito
2018-03-21
The integration of gene expression profiles (GEPs) and large-scale biological networks derived from pathway databases is a widely explored subject. Existing methods are based on network distance measures among significantly measured species, and only a small number of them include the directionality and underlying logic present in biological networks. In this study we approach the GEP-network integration problem by considering the network logic; however, our approach does not require a prior selection of species according to their gene expression level. We start by modeling the biological network, representing its underlying logic using Logic Programming. This model points to reachable discrete network states that maximize a notion of harmony between the possible active or inactive states of the molecular species and the directionality of the pathway reactions according to their activator or inhibitor control role. Only then do we confront these network states with the GEP. From this confrontation, independent graph components are derived, each of them related to a fixed and optimal assignment of active or inactive states. These components allow us to decompose a large-scale network into subgraphs whose molecular species state assignments have different degrees of similarity when compared to the same GEP. We apply our method to study the set of possible states derived from a subgraph of the NCI-PID Pathway Interaction Database. This graph links Multiple Myeloma (MM) genes to known receptors for this blood cancer. We discover that the NCI-PID MM graph has 15 independent components, and when confronting them with 611 MM GEPs, we find one component to be more specific in representing the difference between cancer and healthy profiles.
Quantification of multiple gene expression in individual cells.
Peixoto, António; Monteiro, Marta; Rocha, Benedita; Veiga-Fernandes, Henrique
2004-10-01
Quantitative gene expression analysis aims to define the gene expression patterns determining cell behavior. So far, these assessments could only be performed at the population level, determining the average gene expression within a population and overlooking possible cell-to-cell heterogeneity that could lead to different cell behaviors or cell fates. Understanding individual cell behavior requires multiple gene expression analyses of single cells, and may be fundamental for understanding all types of biological events and differentiation processes. We here describe a new reverse transcription-polymerase chain reaction (RT-PCR) approach allowing the simultaneous quantification of the expression of 20 genes in the same single cell. This method has broad application across species and gene combinations. RT efficiency is evaluated. Uniform and maximized amplification conditions for all genes are provided. Abundance relationships are maintained, allowing precise quantification of the absolute number of mRNA molecules per cell, ranging from 2 to 1.28 × 10^9 for each individual gene. We evaluated the impact of this approach on functional genetic read-outs by studying an apparently homogeneous population (monoclonal T cells recovered 4 d after antigen stimulation), using either this method or conventional real-time RT-PCR. Single-cell studies revealed considerable cell-to-cell variation: not all T cells expressed all individual genes, gene coexpression patterns were very heterogeneous, and mRNA copy numbers varied between different transcripts and in different cells. As a consequence, this single-cell assay introduces new and fundamental information regarding functional genomic read-outs. By comparison, we also show that conventional quantitative assays determining population averages supply insufficient information and may even be highly misleading.
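Absolute copy-number quantification from PCR threshold cycles is typically done via a standard curve: Ct is linear in log10(copies), so one fits the standards and inverts for the samples. The sketch below shows that generic calculation under an assumed dilution series, not the authors' specific calibration.

```python
import numpy as np

def copies_from_ct(ct_samples, ct_standards, copies_standards):
    """Absolute mRNA copy number from qPCR Ct values via a standard curve:
    fit Ct = slope * log10(copies) + intercept on the standards (slope is
    negative, ~ -3.3 at 100% efficiency), then invert for the samples."""
    slope, intercept = np.polyfit(np.log10(copies_standards), ct_standards, 1)
    return 10 ** ((np.asarray(ct_samples) - intercept) / slope)

# ten-fold dilution series of a known standard (invented numbers)
copies = copies_from_ct([24.3, 31.0],
                        ct_standards=[30.1, 26.8, 23.5, 20.2],
                        copies_standards=[1e3, 1e4, 1e5, 1e6])
```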
Diving deeper into Zebrafish development of social behavior: analyzing high resolution data.
Buske, Christine; Gerlai, Robert
2014-08-30
Vertebrate model organisms have been utilized in high-throughput screening, but only with substantial cost and human capital investment. The zebrafish is a vertebrate model species that is a promising and cost-effective candidate for efficient high-throughput screening. Larval zebrafish have already been successfully employed in this regard (Lessman, 2011), but adult zebrafish also show great promise. High-throughput screening requires the use of a large number of subjects and the collection of a substantial amount of data. Collection of data, however, is only one of the demanding aspects of screening: in most screening approaches that involve behavioral data, the main bottleneck that slows throughput is the time-consuming analysis of the collected data. Some automated analytical tools do exist, but often they only work for one subject at a time, eliminating the possibility of fully utilizing zebrafish as a screening tool. This is a particularly important limitation for such complex phenotypes as social behavior. Testing multiple fish at a time can reveal complex social interactions, but it may also allow the identification of outliers within a group of mutagenized or pharmacologically treated fish. Here, we describe a novel method using a custom software tool developed within our laboratory, which enables tracking multiple fish, in combination with a sophisticated analytical approach for summarizing and analyzing high-resolution behavioral data. This paper focuses on the latter, the analytic tool, which we have developed using the R programming language and environment for statistical computing. We argue that combining sophisticated data collection methods with appropriate analytical tools will propel zebrafish into the future of neurobehavioral genetic research. Copyright © 2014. Published by Elsevier B.V.
Fundamental limits on dynamic inference from single-cell snapshots
Weinreb, Caleb; Tusi, Betsabeh K.; Socolovsky, Merav
2018-01-01
Single-cell expression profiling reveals the molecular states of individual cells with unprecedented detail. Because these methods destroy cells in the process of analysis, they cannot measure how gene expression changes over time. However, some information on dynamics is present in the data: the continuum of molecular states in the population can reflect the trajectory of a typical cell. Many methods for extracting single-cell dynamics from population data have been proposed. However, all such attempts face a common limitation: for any measured distribution of cell states, there are multiple dynamics that could give rise to it, and by extension, multiple possibilities for underlying mechanisms of gene regulation. Here, we describe the aspects of gene expression dynamics that cannot be inferred from a static snapshot alone and identify assumptions necessary to constrain a unique solution for cell dynamics from static snapshots. We translate these constraints into a practical algorithmic approach, population balance analysis (PBA), which makes use of a method from spectral graph theory to solve a class of high-dimensional differential equations. We use simulations to show the strengths and limitations of PBA, and then apply it to single-cell profiles of hematopoietic progenitor cells (HPCs). Cell state predictions from this analysis agree with HPC fate assays reported in several papers over the past two decades. By highlighting the fundamental limits on dynamic inference faced by any method, our framework provides a rigorous basis for dynamic interpretation of a gene expression continuum and clarifies best experimental designs for trajectory reconstruction from static snapshot measurements. PMID:29463712
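As a toy illustration of the spectral-graph ingredient (not the full PBA algorithm), one can build a kNN graph over cell states and solve a Laplacian system for a potential, given assumed source and sink cells; all data and the source/sink placement below are invented.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

# toy snapshot: 300 cells x 20 genes
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))

# kNN graph over cell states; PBA-style analyses solve a Laplacian system
# L v = r, where r encodes assumed net proliferation/loss rates per cell
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
A = 0.5 * (A + A.T)                       # symmetrize the adjacency
L = laplacian(A).toarray()

r = np.zeros(300)
r[0], r[-1] = 1.0, -1.0                   # assumed source and sink, summing to zero
v, *_ = np.linalg.lstsq(L, r, rcond=None) # potential, defined up to a constant
```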
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
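The decomposition has a compact form: with total variance V = (Σᵢ Iᵢ)⁻¹ over per-study Fisher information matrices Iᵢ, the study contributions V Iᵢ V sum exactly to V, so their diagonals yield percentage weights per parameter. A minimal sketch with invented information matrices:

```python
import numpy as np

def percent_weights(info_matrices):
    """Percentage study weights for each parameter, decomposing the total
    variance V = inv(sum_i I_i) into per-study contributions V @ I_i @ V,
    whose diagonals sum to diag(V) across studies."""
    I_tot = np.sum(info_matrices, axis=0)
    V = np.linalg.inv(I_tot)
    contrib = np.array([np.diag(V @ I_i @ V) for I_i in info_matrices])
    return 100 * contrib / np.diag(V)   # rows: studies, columns: parameters

# two studies, two parameters (e.g. intercept and treatment effect)
I1 = np.array([[50.0, 5.0], [5.0, 10.0]])
I2 = np.array([[30.0, 2.0], [2.0, 40.0]])
w = percent_weights([I1, I2])           # each column sums to 100
```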
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation to linear combinations, working with such approaches can cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. With this hybridization model, protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
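One geometry-inspired mean consistent with the abstract's description is the log-Euclidean mean of the base kernel matrices; the choice of this particular mean is an assumption here, since the paper considers several matrix means.

```python
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_mean(kernels, eps=1e-8):
    """Log-Euclidean mean exp(mean(log K_i)) of base kernel matrices,
    a non-linear alternative to convex linear kernel combinations.
    A small ridge keeps each PSD kernel strictly positive definite."""
    n = kernels[0].shape[0]
    logs = [np.real(logm(K + eps * np.eye(n))) for K in kernels]
    M = expm(np.mean(logs, axis=0))
    return 0.5 * (M + M.T)  # symmetrize away numerical asymmetry

# toy usage: fuse three random PSD kernels on 5 points
rng = np.random.default_rng(0)
Ks = [(lambda A: A @ A.T)(rng.normal(size=(5, 5))) for _ in range(3)]
K_fused = log_euclidean_mean(Ks)
```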
NASA Technical Reports Server (NTRS)
Cervantes, Emilio; Tocino, Angel
2005-01-01
Structurally, ethylene is the simplest phytohormone and regulates multiple aspects of plant growth and development. Its effects are mediated by a signal transduction cascade involving receptors, MAP kinases and transcription factors. Many morphological effects of ethylene in plant development, including root size, have been previously described. In this article a combined geometric and algebraic approach has been used to analyse the shape and the curvature in the root apex of Arabidopsis seedlings. The process requires the fitting of Bezier curves that reproduce the root apex shape, and the calculation of the corresponding curvatures. The application of the method has allowed us to identify significant differences in the root curvatures of ethylene insensitive mutants (ein2-1 and etr1-1) with respect to the wild-type Columbia.
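Curvature along a fitted cubic Bezier follows directly from its derivatives via κ = (x′y″ − y′x″)/(x′² + y′²)^(3/2). The self-contained sketch below assumes the control points are already available; the authors' curve-fitting step is not reproduced.

```python
import numpy as np

def bezier_derivs(P, t):
    """First and second derivatives of a cubic Bezier curve with
    control points P (4 x 2) at parameter t in [0, 1]."""
    P = np.asarray(P, dtype=float)
    d1 = 3 * ((1 - t) ** 2 * (P[1] - P[0])
              + 2 * (1 - t) * t * (P[2] - P[1])
              + t ** 2 * (P[3] - P[2]))
    d2 = 6 * ((1 - t) * (P[2] - 2 * P[1] + P[0])
              + t * (P[3] - 2 * P[2] + P[1]))
    return d1, d2

def curvature(P, t):
    """Signed planar curvature kappa = (x'y'' - y'x'') / |r'|^3."""
    (x1, y1), (x2, y2) = bezier_derivs(P, t)
    return (x1 * y2 - y1 * x2) / (x1 ** 2 + y1 ** 2) ** 1.5
```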
Application of one-way ANOVA in completely randomized experiments
NASA Astrophysics Data System (ADS)
Wahid, Zaharah; Izwan Latiff, Ahmad; Ahmad, Kartini
2017-12-01
This paper describes an application of the one-way ANOVA technique in completely randomized experiments with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and on-site location. Two different approaches are employed for the analyses: the critical-value approach and the p-value approach. The paper also presents the key assumptions the data must satisfy for the technique to yield valid results. Pairwise comparisons by Tukey's method are also considered and discussed to determine where the significant differences among the means lie once the ANOVA has been performed. The results revealed a statistically significant relationship between the chemical oxygen demand index and on-site location.
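Both steps are available off the shelf in SciPy; the sketch below runs a one-way ANOVA on invented COD measurements at four locations (three replicates each) and then Tukey's HSD for the pairwise comparisons.

```python
from scipy import stats

# chemical oxygen demand index at four locations, three replicates each
loc1 = [12.1, 11.8, 12.4]
loc2 = [14.0, 13.6, 14.3]
loc3 = [11.9, 12.2, 12.0]
loc4 = [15.1, 14.8, 15.4]

# p-value approach: reject H0 (equal means) at alpha = 0.05 if p < 0.05
f_stat, p_value = stats.f_oneway(loc1, loc2, loc3, loc4)

# Tukey HSD (SciPy >= 1.8) then locates which pairs of means differ
tukey = stats.tukey_hsd(loc1, loc2, loc3, loc4)
```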
Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P
2016-06-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify the preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has helped improve the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data, and understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to the statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
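A minimal conditional-logit likelihood for DCE data can be written in a few lines: each alternative's choice probability is a softmax of its attribute utility. The simulated choice tasks below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, chosen):
    """Conditional logit negative log-likelihood.
    X: (n_tasks, n_alternatives, n_attributes) attribute levels;
    chosen: (n_tasks,) index of the alternative picked in each task.
    P(choice j) = exp(x_j . beta) / sum_k exp(x_k . beta)."""
    u = X @ beta                              # utilities per task/alternative
    u -= u.max(axis=1, keepdims=True)         # numerical stability
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(chosen)), chosen].sum()

# toy DCE: 100 choice tasks, 3 alternatives, 2 attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3, 2))
true_beta = np.array([1.0, -0.5])
p = np.exp(X @ true_beta)
p /= p.sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(3, p=pi) for pi in p])

fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, chosen))
```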
A cross-species bi-clustering approach to identifying conserved co-regulated genes.
Sun, Jiangwen; Jiang, Zongliang; Tian, Xiuchun; Bi, Jinbo
2016-06-15
A growing number of studies have explored the process of pre-implantation embryonic development of multiple mammalian species. However, the conservation and variation among different species in their developmental programming are poorly defined, owing to the lack of effective computational methods for detecting co-regulated genes that are conserved across species. The most sophisticated method to date for identifying conserved co-regulated genes is a two-step approach. This approach first identifies gene clusters for each species by a cluster analysis of gene expression data, and subsequently computes the overlaps of clusters identified from different species to reveal common subgroups. This approach cannot deal effectively with the noise in the expression data introduced by the complicated procedures used to quantify gene expression. Furthermore, due to the sequential nature of the approach, the gene clusters identified in the first step may have little overlap among different species in the second step, making it difficult to detect conserved co-regulated genes. We propose a cross-species bi-clustering approach which first denoises the gene expression data of each species into a data matrix. The rows of the data matrices of different species represent the same set of genes, which are characterized by their expression patterns over the developmental stages of each species as columns. A novel bi-clustering method is then developed to cluster genes into subgroups by a joint sparse rank-one factorization of all the data matrices. This method decomposes a data matrix into a product of a column vector and a row vector, where the column vector is a consistent indicator across the matrices (species) that identifies the same gene cluster, and the row vector specifies, for each species, the developmental stages over which the clustered genes co-regulate. An efficient optimization algorithm has been developed, with convergence analysis. This approach was first validated on synthetic data and compared to the two-step method and several recent joint clustering methods. We then applied this approach to two real-world datasets of gene expression during the pre-implantation embryonic development of the human and mouse. Co-regulated genes consistent between the human and mouse were identified, offering insights into conserved functions, as well as similarities and differences in genome activation timing between the human and mouse embryos. The R package containing the implementation of the proposed method in C++ is available at https://github.com/JavonSun/mvbc.git and at the R platform https://www.r-project.org/. Contact: jinbo@engr.uconn.edu. © The Author 2016. Published by Oxford University Press.
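A stripped-down sketch of the joint rank-one idea, using alternating updates with a relative soft threshold for sparsity (the paper's exact formulation and convergence analysis differ), might be:

```python
import numpy as np

def joint_rank_one(mats, n_iter=200, tau=0.1):
    """Sketch of a joint sparse rank-one factorization: a shared gene
    indicator u and per-species stage vectors v_k so that A_k ~ u v_k^T."""
    u = np.random.default_rng(1).random(mats[0].shape[0])
    for _ in range(n_iter):
        # update each species' stage vector given the shared gene vector
        vs = [A.T @ u / (u @ u) for A in mats]
        # update the shared gene vector by pooling evidence across species
        u = sum(A @ v for A, v in zip(mats, vs)) / sum(v @ v for v in vs)
        thr = tau * np.abs(u).max()                        # relative soft threshold
        u = np.sign(u) * np.maximum(np.abs(u) - thr, 0.0)  # promotes sparsity
        u /= np.linalg.norm(u) + 1e-12
    return u, vs
```

Genes with nonzero entries in the shared vector u form a candidate conserved cluster, and each v_k indicates the developmental stages involved for species k.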
Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study
Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A
2010-01-01
Objectives: To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births, and to explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods: A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results: In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions: The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305
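The contrast between naive and cluster-aware analysis is easy to reproduce; this sketch fits a plain logistic regression and a GEE with an exchangeable working correlation to simulated twin data (all numbers hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_pairs = 42                                    # hypothetical twin pairs
family = np.repeat(np.arange(n_pairs), 2)       # cluster id shared by siblings
x = rng.binomial(1, 0.5, 2 * n_pairs)           # treatment indicator
b = rng.normal(0.0, 1.0, n_pairs)[family]       # shared family effect -> clustering
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x + b))))
X = sm.add_constant(x.astype(float))

naive = sm.Logit(y, X).fit(disp=0)              # assumes independent infants
gee = sm.GEE(y, X, groups=family, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(naive.bse[1], gee.bse[1])                 # clustered SEs typically differ
```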
Application of MSCTA combined with VRT in the operation of cervical dumbbell tumors
Wang, Wan; Lin, Jia; Knosp, Engelbert; Zhao, Yuanzheng; Xiu, Dianhui; Guo, Yongchuan
2015-01-01
Cervical dumbbell tumors pose great difficulty for neurosurgical treatment and carry a remarkable local recurrence rate, making them a formidable problem for neurosurgery. However, MRI and CT, the routine preoperative evaluation schemes, fail to reveal the three-dimensional relationships between the tumor and adjacent structures. Here, we report the clinical application of MSCTA and VRT in the three-dimensional reconstruction of cervical dumbbell tumors. From January 2012 to July 2014, 24 patients diagnosed with cervical dumbbell tumor were retrospectively analyzed. All enrolled patients underwent preoperative MSCTA/VRT image reconstruction to explore the three-dimensional stereoscopic anatomical relationships among the neuroma, spinal cord, and vertebral artery and to select the optimal surgical approach from multiple configurations. In all patients, the three-dimensional anatomical relationships among tumor, adjacent vessels, and vertebrae were vividly reconstructed by MSCTA/VRT, in accordance with intraoperative findings. Selecting the optimal surgical approach from multiple configurations contributed to total resection of the tumor, minimal damage to vessels and nerves, and maximal maintenance of cervical spine stability. Preoperative MSCTA/VRT thus supports reconstruction of the three-dimensional stereoscopic anatomical relationships between a cervical dumbbell tumor and adjacent structures, selection of the optimal surgical approach from multiple configurations, and reduction of intraoperative damage and postoperative complications. PMID:26550385
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and the linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods fail to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
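For intuition, a generic Euclidean-distance detector in the spirit of MED can be sketched as follows; the window length and threshold are arbitrary choices, not the study's settings:

```python
import numpy as np

def euclidean_detector(series, window=48, threshold=3.0):
    """Flag time steps whose multi-parameter water-quality reading is
    unusually far (in Euclidean distance) from the trailing-window mean."""
    series = np.asarray(series, dtype=float)      # shape (time, parameters)
    alarms = []
    for t in range(window, len(series)):
        baseline = series[t - window:t]
        d = np.linalg.norm(series[t] - baseline.mean(axis=0))
        scale = np.linalg.norm(baseline.std(axis=0)) + 1e-9
        if d / scale > threshold:
            alarms.append(t)                      # candidate contamination event
    return alarms
```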
Auerbach, Nancy A; Tulloch, Ayesha I T; Possingham, Hugh P
Conservation practitioners, faced with managing multiple threats to biodiversity and limited funding, must prioritize investment in different management actions. From an economic perspective, it is routine practice to invest where the highest rate of return is expected. This return-on-investment (ROI) thinking can also benefit species conservation, and researchers are developing sophisticated approaches to support decision-making for cost-effective conservation. However, applied use of these approaches is limited. Managers may be wary of “black-box” algorithms or complex methods that are difficult to explain to funding agencies. As an alternative, we demonstrate the use of a basic ROI analysis for determining where to invest in cost-effective management to address threats to species. This method can be applied using basic geographic information system and spreadsheet calculations. We illustrate the approach in a management action prioritization for a biodiverse region of eastern Australia. We use ROI to prioritize management actions for two threats to a suite of threatened species: habitat degradation by cattle grazing, and predation by invasive red foxes (Vulpes vulpes). We show how decisions based on cost-effective threat management depend upon how expected benefits to species are defined and how benefits and costs co-vary. By considering a combination of species richness, restricted habitats, species vulnerability, and costs of management actions, small investments can result in greater expected benefit compared with management decisions that consider only species richness. Furthermore, a landscape management strategy that implements multiple actions is more efficient than managing only for one threat, or more traditional approaches that don't consider ROI. Our approach provides transparent and logical decision support for prioritizing different actions intended to abate threats associated with multiple species; it is of use when managers need a justifiable and repeatable approach to investment.
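The core of such a basic ROI analysis is a benefit-per-cost ranking that works equally well in a spreadsheet or a few lines of code; the parcels, benefits, and costs below are invented for illustration:

```python
# Hypothetical parcels: expected species benefit of an action and its cost.
parcels = [
    {"id": "p1", "benefit": 4.2, "cost": 10_000},   # e.g. fox baiting
    {"id": "p2", "benefit": 1.1, "cost": 1_500},    # e.g. grazing exclusion
    {"id": "p3", "benefit": 2.5, "cost": 4_000},
]

# Rank by return on investment (benefit per dollar) and fund greedily.
budget = 6_000
for p in sorted(parcels, key=lambda p: p["benefit"] / p["cost"], reverse=True):
    if p["cost"] <= budget:
        budget -= p["cost"]
        print(f"fund {p['id']} (ROI = {p['benefit'] / p['cost']:.2e})")
```

How "benefit" is defined (species richness alone versus richness weighted by habitat restriction and vulnerability) changes the ranking, which is precisely the sensitivity the abstract highlights.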
Foreground Mitigation in the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Chapman, Emma
2018-05-01
The EoR foregrounds can be up to three orders of magnitude greater than the cosmological signal we wish to detect. Multiple methods have been developed to extract the cosmological signal, falling roughly into three categories: foreground removal, foreground suppression, and foreground avoidance. These main approaches are briefly discussed in this review, and consideration is given to the future application of these methods as a multi-layered approach.
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
NASA Astrophysics Data System (ADS)
Leonardi, Marcelo
The primary purpose of this study was to examine the impact of a scheduling change from a trimester 4x4 block schedule to a modified hybrid schedule on student achievement in ninth grade biology courses. This study examined the impact of the scheduling change on student achievement through teacher-created benchmark assessments in Genetics, DNA, and Evolution and on the California Standardized Test (CST) in Biology. The secondary purpose of this study was to examine ninth grade biology teacher perceptions of ninth grade biology student achievement. Using a mixed methods research approach, data were collected both quantitatively and qualitatively as aligned to the research questions. Quantitative methods included gathering data from departmental benchmark exams and the California Standardized Test in Biology and conducting multivariate analysis of covariance and analysis of covariance to determine significant differences. Qualitative methods included journal entry questions and focus group interviews. The results revealed a statistically significant increase in scores on both the DNA and Evolution benchmark exams following the change in scheduling format; the scheduling change was responsible for 1.5% of the increase in DNA benchmark scores and 2% of the increase in Evolution benchmark scores. The results revealed a statistically significant decrease in scores on the Genetics benchmark exam as a result of the scheduling change, with the scheduling change responsible for 1% of the decrease. The results also revealed a statistically significant increase in scores on the CST Biology exam, with the scheduling change responsible for 0.7% of the increase. Results of the focus group discussions indicated that all teachers preferred the modified hybrid schedule over the trimester schedule and that it improved student achievement.
The SAGE Model of Social Psychological Research.
Power, Séamus A; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-05-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.
McFarquhar, Martyn; McKie, Shane; Emsley, Richard; Suckling, John; Elliott, Rebecca; Williams, Stephen
2016-01-01
Repeated measurements and multimodal data are common in neuroimaging research. Despite this, conventional approaches to group-level analysis ignore these repeated measurements in favour of multiple between-subject models using contrasts of interest. This approach has a number of drawbacks, as certain designs and comparisons of interest are either not possible or complex to implement. Unfortunately, even when attempting to analyse group-level data within a repeated-measures framework, the methods implemented in popular software packages make potentially unrealistic assumptions about the covariance structure across the brain. In this paper, we describe how this issue can be addressed in a simple and efficient manner using the multivariate form of the familiar general linear model (GLM), as implemented in a new MATLAB toolbox. This multivariate framework is discussed, paying particular attention to methods of inference by permutation. Comparisons with existing approaches and software packages for dependent group-level neuroimaging data are made. We also demonstrate how this method is easily adapted to handle dependency at the group level when multiple modalities of imaging are collected from the same individuals. Follow-up of these multimodal models using linear discriminant analysis (LDA) is also discussed, with applications to future studies wishing to integrate multiple scanning techniques into investigating populations of interest. PMID:26921716
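A bare-bones version of permutation inference with a maximum statistic (the toolbox's multivariate machinery is considerably richer than this) looks like:

```python
import numpy as np

def permutation_p(data, groups, n_perm=5000, seed=0):
    """Permutation inference for a two-group difference across many
    dependent variables, using the max statistic to control FWE."""
    rng = np.random.default_rng(seed)
    def max_stat(g):
        a, b = data[g == 0], data[g == 1]
        t = (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0) / len(a) + b.var(0) / len(b) + 1e-12)
        return np.abs(t).max()
    observed = max_stat(groups)
    null = np.array([max_stat(rng.permutation(groups)) for _ in range(n_perm)])
    return (1 + (null >= observed).sum()) / (n_perm + 1)

rng = np.random.default_rng(1)
data = rng.normal(size=(40, 100))     # 40 subjects x 100 imaging variables
groups = np.repeat([0, 1], 20)
print(permutation_p(data, groups))
```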
Statistics of dislocation pinning at localized obstacles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutta, A.; Bhattacharya, M., E-mail: mishreyee@vecc.gov.in; Barat, P.
2014-10-14
Pinning of dislocations at nanosized obstacles like precipitates, voids, and bubbles is a crucial mechanism in the context of phenomena like hardening and creep. The interaction between such an obstacle and a dislocation is often studied at the fundamental level by means of analytical tools, atomistic simulations, and finite element methods. Nevertheless, the information extracted from such studies cannot be utilized to its maximum extent on account of insufficient information about the underlying statistics of this process, which comprises a large number of dislocations and obstacles in a system. Here, we propose a new statistical approach, where the statistics of pinning of dislocations by idealized spherical obstacles is explored by taking into account the generalized size distribution of the obstacles along with the dislocation density within a three-dimensional framework. Starting with a minimal set of material parameters, the framework employs the method of geometrical statistics with a few simple assumptions compatible with the real physical scenario. The application of this approach, in combination with the knowledge of fundamental dislocation-obstacle interactions, has successfully been demonstrated for dislocation pinning at nanovoids in neutron-irradiated type 316 stainless steel in regard to the non-conservative motion of dislocations. An interesting phenomenon of transition from rare pinning to multiple pinning regimes with increasing irradiation temperature is revealed.
Reid, Mark E.; Coe, Jeffrey A.; Brien, Dianne
2016-01-01
Many debris flows increase in volume as they travel downstream, enhancing their mobility and hazard. Volumetric growth can result from diverse physical processes, such as channel sediment entrainment, stream bank collapse, adjacent landsliding, hillslope erosion and rilling, and coalescence of multiple debris flows; incorporating these varied phenomena into physics-based debris-flow models is challenging. As an alternative, we embedded effects of debris-flow growth into an empirical/statistical approach to forecast potential inundation areas within digital landscapes in a GIS framework. Our approach used an empirical debris-growth function to account for the effects of growth phenomena. We applied this methodology to a debris-flow-prone area in the Oregon Coast Range, USA, where detailed mapping revealed areas of erosion and deposition along paths of debris flows that occurred during a large storm in 1996. Erosion was predominant in stream channels with slopes > 5°. Using pre- and post-event aerial photography, we derived upslope contributing area and channel-length growth factors. Our method reproduced the observed inundation patterns produced by individual debris flows; it also generated reproducible, objective potential inundation maps for entire drainage networks. These maps better matched observations than those using previous methods that focus on proximal or distal regions of a drainage network.
Galbally, Javier; Marcel, Sébastien; Fierrez, Julian
2014-02-01
Ensuring the actual presence of a real, legitimate trait, as opposed to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication, one that requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same image acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
NASA Astrophysics Data System (ADS)
Shenoy Handiru, Vikram; Vinod, A. P.; Guan, Cuntai
2017-08-01
Objective. In electroencephalography (EEG)-based brain-computer interface (BCI) systems for motor control tasks, the conventional practice is to decode motor intentions using scalp EEG. However, scalp EEG reveals only limited information about the complex tasks of movement with a higher degree of freedom. Therefore, our objective is to investigate the effectiveness of source-space EEG in extracting relevant features that discriminate arm movement in multiple directions. Approach. We propose a novel feature extraction algorithm based on supervised factor analysis that models data from source-space EEG. To this end, we computed the features from the source dipoles confined to the Brodmann areas of interest (BA4a, BA4p and BA6). Further, we embedded class-wise labels of multi-direction (multi-class) source-space EEG into an unsupervised factor analysis to turn it into a supervised learning method. Main results. Our approach provided an average decoding accuracy of 71% for the classification of hand movement in four orthogonal directions, which is significantly higher (>10%) than the classification accuracy obtained using state-of-the-art spatial pattern features in sensor space. Also, the group analysis of the spectral characteristics of source-space EEG indicates that the slow cortical potentials from a set of cortical source dipoles reveal discriminative information regarding the movement parameter, direction. Significance. This study presents evidence that low-frequency components in the source space play an important role in movement kinematics, and thus it may lead to new strategies for BCI-based neurorehabilitation.
A biomaterial screening approach reveals microenvironmental mechanisms of drug resistance.
Schwartz, Alyssa D; Barney, Lauren E; Jansen, Lauren E; Nguyen, Thuy V; Hall, Christopher L; Meyer, Aaron S; Peyton, Shelly R
2017-12-11
Traditional drug screening methods lack features of the tumor microenvironment that contribute to resistance. Most studies examine cell response in a single biomaterial platform in depth, leaving a gap in understanding how extracellular signals such as stiffness, dimensionality, and cell-cell contacts act independently or are integrated within a cell to affect either drug sensitivity or resistance. This is critically important, as adaptive resistance is mediated, at least in part, by the extracellular matrix (ECM) of the tumor microenvironment. We developed an approach to screen drug responses in cells cultured on 2D and in 3D biomaterial environments to explore how key features of ECM mediate drug response. This approach uncovered that cells on 2D hydrogels and spheroids encapsulated in 3D hydrogels were less responsive to receptor tyrosine kinase (RTK)-targeting drugs sorafenib and lapatinib, but not cytotoxic drugs, compared to single cells in hydrogels and cells on plastic. We found that transcriptomic differences between these in vitro models and tumor xenografts did not reveal mechanisms of ECM-mediated resistance to sorafenib. However, a systems biology analysis of phospho-kinome data uncovered that variation in MEK phosphorylation was associated with RTK-targeted drug resistance. Using sorafenib as a model drug, we found that co-administration with a MEK inhibitor decreased ECM-mediated resistance in vitro and reduced in vivo tumor burden compared to sorafenib alone. In sum, we provide a novel strategy for identifying and overcoming ECM-mediated resistance mechanisms by performing drug screening, phospho-kinome analysis, and systems biology across multiple biomaterial environments.
A Method for Multitask fMRI Data Fusion Applied to Schizophrenia
Calhoun, Vince D.; Adali, Tulay; Kiehl, Kent A.; Astur, Robert; Pekar, James J.; Pearlson, Godfrey D.
2009-01-01
It is becoming common to collect data from multiple functional magnetic resonance imaging (fMRI) paradigms on a single individual. The data from these experiments are typically analyzed separately and sometimes directly subtracted from one another on a voxel-by-voxel basis. These comparative approaches, although useful, do not directly attempt to examine potential commonalities between tasks and between voxels. To remedy this we propose a method to extract maximally spatially independent maps for each task that are “coupled” together by a shared loading parameter. We first compute an activation map for each task and each individual as “features,” which are then used to perform joint independent component analysis (jICA) on the group data. We demonstrate our approach on a data set derived from healthy controls and schizophrenia patients, each of which carried out an auditory oddball task and a Sternberg working memory task. Our analysis approach revealed two interesting findings in the data that were missed with traditional analyses. First, consistent with our hypotheses, schizophrenia patients demonstrate “decreased” connectivity in a joint network including portions of regions implicated in two prevalent models of schizophrenia. A second finding is that for the voxels identified by the jICA analysis, the correlation between the two tasks was significantly higher in patients than in controls. This finding suggests that schizophrenia patients activate “more similarly” for both tasks than do controls. A possible synthesis of both findings is that patients are activating less, but also activating with a less-unique set of regions for these very different tasks. Both of the findings described support the claim that examination of joint activation across multiple tasks can enable new questions to be posed about fMRI data. Our approach can also be applied to data using more than two tasks. It thus provides a way to integrate and probe brain networks using a variety of tasks and may increase our understanding of coordinated brain networks and the impact of pathology upon them. PMID:16342150
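The essential data manipulation in jICA is the feature concatenation. The sketch below runs scikit-learn's FastICA on random placeholder maps purely to show the shapes involved; it is not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder activation maps: one per subject per task (random data used
# only to illustrate shapes; real inputs would be first-level contrast maps).
n_subjects, n_voxels = 30, 5000
rng = np.random.default_rng(0)
task1 = rng.normal(size=(n_subjects, n_voxels))   # e.g. auditory oddball maps
task2 = rng.normal(size=(n_subjects, n_voxels))   # e.g. Sternberg maps

# Joint ICA: concatenate along voxels, so each component carries one spatial
# map per task but a single shared ("coupled") subject loading.
joint = np.hstack([task1, task2])                 # (subjects, 2 * voxels)
ica = FastICA(n_components=8, random_state=0)
maps = ica.fit_transform(joint.T)                 # (2 * voxels, components)
maps_task1, maps_task2 = maps[:n_voxels], maps[n_voxels:]
subject_loadings = ica.mixing_                    # (subjects, components)
# Group effects are then tested on subject_loadings (patients vs controls).
```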
Gulmans, J; Vollenbroek-Hutten, M M R; Van Gemert-Pijnen, J E W C; Van Harten, W H
2007-10-01
Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment methods are often one-sided evaluations that are not appropriate for integrated care settings. We developed an evaluation approach that takes into account the multiple communication links and evaluation perspectives inherent to these settings. In this study, we describe this approach, using the integrated care setting of cerebral palsy as an illustration. The approach follows a three-step mixed design in which the results of each step are used to mark out the subsequent step's focus. The first step, a patient questionnaire, aims to identify quality gaps experienced by patients, comparing their expectations and experiences with respect to patient-professional and inter-professional communication. The resulting gaps form the input of the second step, in-depth interviews with a subset of patients to evaluate the underlying factors of ineffective communication. The resulting factors form the input of the final step, focus group meetings with professionals to corroborate and complete the findings. By combining methods, the presented approach aims to minimize the limitations inherent to the application of single methods. The comprehensiveness of the approach enables its applicability in various integrated care settings. Its sequential design allows for in-depth evaluation of relevant quality gaps. Further research is needed to evaluate the approach's feasibility in practice. In our subsequent study, we present the results of the approach in the integrated care setting of children with cerebral palsy in three Dutch care regions.
Zemali, El-Amine; Boukra, Abdelmadjid
2015-08-01
The multiple sequence alignment (MSA) problem is one of the most challenging in bioinformatics; it involves discovering similarity between a set of protein or DNA sequences. This paper introduces a new method for the MSA problem called biogeography-based optimization with multiple populations (BBOMP). It builds on a recent metaheuristic inspired by the mathematics of biogeography, named biogeography-based optimization (BBO). To improve the exploration ability of BBO, we introduce a multi-population scheme in which each population has its own parameters. These parameters are used to build progressive alignments, allowing more diversity. At each iteration, the best solution found is injected into each population. Moreover, to improve solution quality, six operators are defined; these operators are selected with a dynamic probability that changes according to each operator's efficiency. To test the performance of the proposed approach, we considered a set of datasets from Balibase 2.0 and compared it with many recent algorithms such as GAPAM, MSA-GA, QEAMSA and RBT-GA. The results show that the proposed approach achieves a better average score than the previously cited methods.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty, and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB), and average deviation amplitude (D). Flood forecasting uncertainty is also substantially reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and it could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Zhu, Xiaofeng; Feng, Tao; Tayo, Bamidele O; Liang, Jingjing; Young, J Hunter; Franceschini, Nora; Smith, Jennifer A; Yanek, Lisa R; Sun, Yan V; Edwards, Todd L; Chen, Wei; Nalls, Mike; Fox, Ervin; Sale, Michele; Bottinger, Erwin; Rotimi, Charles; Liu, Yongmei; McKnight, Barbara; Liu, Kiang; Arnett, Donna K; Chakravati, Aravinda; Cooper, Richard S; Redline, Susan
2015-01-08
Genome-wide association studies (GWASs) have identified many genetic variants underlying complex traits. Many detected genetic loci harbor variants that associate with multiple, even distinct, traits. Most current analysis approaches focus on single traits, even though the final results from multiple traits are evaluated together. Such approaches miss the opportunity to systematically integrate the phenome-wide data available for genetic association analysis. In this study, we propose a general approach that can integrate association evidence from summary statistics of multiple traits, whether correlated, independent, continuous, or binary, and whether they come from the same or different studies. We allow for trait heterogeneity effects. Population structure and cryptic relatedness can also be controlled. Our simulations suggest that the proposed method has improved statistical power over single-trait analysis in most of the cases we studied. We applied our method to the Continental Origins and Genetic Epidemiology Network (COGENT) African ancestry samples for three blood pressure traits and identified four loci (CHIC2, HOXA-EVX1, IGFBP1/IGFBP3, and CDH17; p < 5.0 × 10⁻⁸) associated with hypertension-related traits that were missed by single-trait analysis in the original report. Six additional loci with suggestive association evidence (p < 5.0 × 10⁻⁷) were also observed, including CACNA1D and WNT3. Our study strongly suggests that analyzing multiple phenotypes can improve statistical power and that such analysis can be executed with the summary statistics from GWASs. Our method also provides a way to study cross-phenotype (CP) associations by using summary statistics from GWASs of multiple phenotypes. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
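A simplified stand-in for this kind of summary-statistic integration is the omnibus chi-square on correlated z-scores shown below; the published method additionally models trait heterogeneity, which this sketch omits:

```python
import numpy as np
from scipy import stats

def cross_trait_chi2(z, R):
    """Combine per-trait GWAS z-scores for one SNP into a single test.
    z: z-scores across traits; R: trait correlation matrix (estimable
    from genome-wide summary statistics under the null)."""
    z = np.asarray(z, float)
    stat = z @ np.linalg.solve(R, z)   # ~ chi2 with len(z) dof under H0
    return stats.chi2.sf(stat, df=len(z))

R = np.array([[1.0, 0.4, 0.3],
              [0.4, 1.0, 0.5],
              [0.3, 0.5, 1.0]])        # illustrative blood-pressure trait correlations
print(cross_trait_chi2([2.1, 2.8, 1.9], R))
```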
ERIC Educational Resources Information Center
Mahler, Joni D.
2011-01-01
This study examined whether a story/language based method of teaching the multiplication facts would be helpful to students who previously had difficulty with the memorization of those facts. Using the curriculum "Memorize in Minutes" by Alan Walker (Walker, 2000), the researcher taught six fourth-grade students the multiplication facts (3s…
The Hunger Games: Salmonella, Anorexia, and NLRP3.
O'Neill, Luke A J
2017-02-07
Rao and colleagues (2017) reveal how Salmonella limits anorexia in mice, protecting them and promoting the spread of infection. The mechanism involves inhibition of the NLRP3 inflammasome limiting vagal nerve stimulation by IL-1β, which in turn promotes appetite. A possible new therapeutic approach for treating anorexia in multiple diseases is proposed. Copyright © 2017 Elsevier Inc. All rights reserved.
Local discretization method for overdamped Brownian motion on a potential with multiple deep wells.
Nguyen, P T T; Challis, K J; Jack, M W
2016-11-01
We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deep compared to five times the thermal energy can be associated with a discrete localized state while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
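The "five times the thermal energy" criterion can be illustrated with a crude well classifier on a sampled potential. This toy sketch only reproduces the depth test, not the basis-state expansion itself:

```python
import numpy as np

def deep_well_states(x, V, kT=1.0, depth_factor=5.0):
    """Locate local minima of a sampled potential V(x) and keep those whose
    smaller adjacent barrier exceeds depth_factor * kT as discrete states."""
    minima = [i for i in range(1, len(x) - 1) if V[i] < V[i - 1] and V[i] < V[i + 1]]
    maxima = [i for i in range(1, len(x) - 1) if V[i] > V[i - 1] and V[i] > V[i + 1]]
    states = []
    for m in minima:
        left = [b for b in maxima if b < m]
        right = [b for b in maxima if b > m]
        if not left or not right:
            continue                               # edge wells skipped for brevity
        barrier = min(V[left[-1]], V[right[0]]) - V[m]
        if barrier > depth_factor * kT:
            states.append(x[m])
    return states

x = np.linspace(0.0, 6 * np.pi, 4000)
V = 3.0 * np.cos(x) + 0.3 * x                      # tilted multi-well potential
print(deep_well_states(x, V))                      # wells deep enough to be states
```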
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
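In sketch form, the core idea (imputations drawn from a distribution of imputation models rather than a single model) looks like this for a binary outcome; the delta offsets encoding beliefs about nonresponse are arbitrary illustrations:

```python
import numpy as np

def impute_with_mechanism_uncertainty(y, missing, deltas, m_per_model=5, seed=0):
    """Sketch: draw an offset delta per imputation model (encoding beliefs
    about nonignorable nonresponse), then impute binary values under it."""
    rng = np.random.default_rng(seed)
    p_obs = y[~missing].mean()                    # observed success rate
    imputations = []
    for delta in deltas:                          # distribution over mechanisms
        p_mis = np.clip(p_obs + delta, 0.01, 0.99)
        for _ in range(m_per_model):              # nested imputations per model
            filled = y.copy()
            filled[missing] = rng.binomial(1, p_mis, missing.sum())
            imputations.append(filled)
    return imputations                            # pool with nested-MI rules

y = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0], float)
missing = np.array([False] * 7 + [True] * 3)
imps = impute_with_mechanism_uncertainty(y, missing, deltas=[-0.2, 0.0, 0.2])
```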
NASA Astrophysics Data System (ADS)
Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen
2017-10-01
In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal may still contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, data points of calibration and validation datasets should be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the simple exponential convention in its extraordinary sensitivity to the first few meters around the probe and in its dependence on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost to overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
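Schematically, the approach replaces an unweighted calibration average with a radially weighted one. The sketch below contrasts the simple exponential convention with a made-up weight boosted in the first few meters; the actual revised function additionally depends on air pressure, air humidity, soil moisture, and vegetation:

```python
import numpy as np

def weighted_field_average(theta, r, w):
    """Average point soil-moisture samples theta taken at radii r (m)
    from the probe, using a radial weight function w(r)."""
    weights = w(np.asarray(r, dtype=float))
    return np.average(theta, weights=weights)

# Conventional exponential convention vs. a purely illustrative weight
# that is boosted within the first few meters around the probe.
w_exp = lambda r: np.exp(-r / 100.0)
w_revised = lambda r: np.exp(-r / 100.0) + 4.0 * np.exp(-r / 4.0)

theta = [0.21, 0.25, 0.30, 0.28]     # volumetric soil moisture samples
r = [2.0, 25.0, 75.0, 150.0]         # sample distances from the sensor
print(weighted_field_average(theta, r, w_exp),
      weighted_field_average(theta, r, w_revised))
```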
Zhu, X Q; Gasser, R B
1998-06-01
In this study, we assessed single-strand conformation polymorphism (SSCP)-based approaches for their capacity to fingerprint sequence variation in the ribosomal DNA (rDNA) of ascaridoid nematodes of veterinary and/or human health significance. The second internal transcribed spacer region (ITS-2) of rDNA was utilised as the target region because it is known to provide species-specific markers for this group of parasites. ITS-2 was amplified by PCR from genomic DNA derived from individual parasites and subjected to analysis. Direct SSCP analysis of amplicons from seven taxa (Toxocara vitulorum, Toxocara cati, Toxocara canis, Toxascaris leonina, Baylisascaris procyonis, Ascaris suum and Parascaris equorum) showed that the single-strand (ss) ITS-2 patterns produced allowed their unequivocal identification to species. While no variation in SSCP patterns was detected in the ITS-2 within the four species for which multiple samples were available, the method allowed the direct display of four distinct sequence types of ITS-2 among individual worms of T. cati. Comparison of SSCP/sequencing with the methods of dideoxy fingerprinting (ddF) and restriction endonuclease fingerprinting (REF) revealed that ddF also allowed the definition of the four sequence types, whereas REF displayed three of the four. The findings indicate the usefulness of SSCP-based approaches for the identification of ascaridoid nematodes to species, the direct display of sequence variation in rDNA, and the detection of population variation. The ability to fingerprint microheterogeneity in ITS-2 rDNA using such approaches also has implications for studying fundamental aspects of mutational change in rDNA.
Mechanism of chimera formation during the Multiple Displacement Amplification reaction.
Lasken, Roger S; Stockwell, Timothy B
2007-04-12
Multiple Displacement Amplification (MDA) is a method used for amplifying limiting DNA sources. The high-molecular-weight amplified DNA is ideal for DNA library construction. While this has enabled genomic sequencing from one or a few cells of unculturable microorganisms, the process is complicated by the tendency of MDA to generate chimeric DNA rearrangements in the amplified DNA. Determining the source of the DNA rearrangements would be an important step towards reducing or eliminating them. Here, we characterize the major types of chimeras formed by carrying out an MDA whole genome amplification from a single E. coli cell and sequencing by the 454 Life Sciences method. Analysis of 475 chimeras revealed the predominant reaction mechanisms that create the DNA rearrangements. The highly branched DNA synthesized in MDA can assume many alternative secondary structures. DNA strands extended on an initial template can be displaced, becoming available to prime on a second template, creating the chimeras. Evidence supports a model in which branch migration can displace 3'-ends, freeing them to prime on new templates. More than 85% of the resulting DNA rearrangements were inverted sequences with intervening deletions, as the model predicts. Intramolecular rearrangements were favored, with displaced 3'-ends reannealing to single-stranded 5'-strands contained within the same branched DNA molecule. In over 70% of the chimeric junctions, the 3' termini had initiated priming at complementary sequences of 2-21 nucleotides (nts) in the new templates. Formation of chimeras is an important limitation of the MDA method, particularly for whole genome sequencing. Identification of the mechanism for chimera formation provides new insight into the MDA reaction and suggests methods to reduce chimeras. The 454 sequencing approach used here will provide a rapid method to assess the utility of reaction modifications.
Kim, Eun Sook; Cao, Chunhua
2015-01-01
Considering that group comparisons are common in social science, we examined two latent group mean testing methods for cases in which the groups of interest were at either the between or the within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., the correlation between random effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.
From Continuous Improvement to Organisational Learning: Developmental Theory.
ERIC Educational Resources Information Center
Murray, Peter; Chapman, Ross
2003-01-01
Explores continuous improvement methods, which underlie total quality management, finding barriers to implementation in practice that are related to a one-dimensional approach. Suggests a multiple, unbounded learning cycle, a holistic approach that includes adaptive learning, learning styles, generative learning, and capability development.…
Blatti, Charles; Sinha, Saurabh
2016-07-15
Analysis of co-expressed gene sets typically involves testing for enrichment of different annotations or 'properties' such as biological processes, pathways, transcription factor binding sites, etc., one property at a time. This common approach ignores any known relationships among the properties or the genes themselves. It is believed that known biological relationships among genes and their many properties may be exploited to more accurately reveal commonalities of a gene set. Previous work has sought to achieve this by building biological networks that combine multiple types of gene-gene or gene-property relationships, and performing network analysis to identify other genes and properties most relevant to a given gene set. Most existing network-based approaches for recognizing genes or annotations relevant to a given gene set collapse information about different properties to simplify (homogenize) the networks. We present a network-based method for ranking genes or properties related to a given gene set. Such related genes or properties are identified from among the nodes of a large, heterogeneous network of biological information. Our method involves a random walk with restarts, performed on an initial network with multiple node and edge types that preserve more of the original, specific property information than current methods that operate on homogeneous networks. In this first stage of our algorithm, we find the properties that are the most relevant to the given gene set and extract a subnetwork of the original network, comprising only these relevant properties. We then re-rank genes by their similarity to the given gene set, based on a second random walk with restarts, performed on the above subnetwork. We demonstrate the effectiveness of this algorithm for ranking genes related to Drosophila embryonic development and aggressive responses in the brains of social animals. Availability: DRaWR was implemented as an R package, available at veda.cs.illinois.edu/DRaWR. Contact: blatti@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
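The random walk with restarts at the heart of this method has a compact fixed-point form. A generic dense-matrix sketch (the DRaWR package itself handles heterogeneous sparse networks and the two-stage subnetwork logic) is:

```python
import numpy as np

def random_walk_with_restarts(W, seeds, restart=0.3, tol=1e-8):
    """Stationary RWR scores on a network.
    W: column-normalized transition matrix; seeds: indices of the gene set."""
    n = W.shape[0]
    p0 = np.zeros(n)
    p0[seeds] = 1.0 / len(seeds)                 # restart distribution
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next                        # rank nodes by this score
        p = p_next

# Toy 4-node network (column-normalized adjacency), seed gene set = {0}.
W = np.array([[0.0, 0.5, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.5],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])
print(random_walk_with_restarts(W, seeds=[0]))
```

In the first stage the walk runs on the full heterogeneous network to rank property nodes; the second walk then re-ranks genes on the pruned subnetwork of top properties.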
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single-level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
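The voting step itself is simple. A sketch with hypothetical subject labels follows; the tie-breaking policy is an assumption, not taken from the paper:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse identity decisions from multiple classifiers by simple voting;
    ties fall back to the first (highest-priority) classifier's decision."""
    best, votes = Counter(decisions).most_common(1)[0]
    if votes > len(decisions) / 2:
        return best
    return decisions[0]

# Hypothetical outputs of the four HMM-based subsystems for one probe image.
print(majority_vote(["subject_17", "subject_17", "subject_05", "subject_17"]))
```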
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
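Force matching reduces to a least-squares fit of model forces to reference forces. A toy one-dimensional version (a Lennard-Jones pair force standing in for DFTB, synthetic data standing in for DFT) is:

```python
import numpy as np
from scipy.optimize import least_squares

def pair_force(r, params):
    """Radial Lennard-Jones force, the cheap model being fitted."""
    eps, sigma = params
    return 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

rng = np.random.default_rng(3)
r_samples = rng.uniform(0.9, 2.5, 500)                  # sampled configurations
f_ref = pair_force(r_samples, (1.0, 1.0)) + rng.normal(0, 0.05, 500)  # "DFT" forces

# Force matching: choose parameters minimizing the force residuals.
fit = least_squares(lambda p: pair_force(r_samples, p) - f_ref, x0=[0.5, 1.2])
print(fit.x)   # recovers eps ~ 1, sigma ~ 1
```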
An evaluation of problem-based learning in a nursing theory and practice module.
Barrow, Elizabeth J; Lyte, Geraldine; Butterworth, Tony
2002-03-01
Interest in problem-based learning (PBL) within nurse education has increased internationally in recent years. The expectations of this teaching/learning strategy are that it will enable nurses to develop skills required for professional practice, including enquiry, reasoning, interpersonal, and lifelong learning skills. However, to date, there is little empirical evidence within the nursing literature to support such expectations. This study evaluated the reiterative PBL approach in an undergraduate programme within one university. The Responsive Evaluation Model (Guba & Lincoln 1989) guided the design of the study, permitting multiple methods: observation, focus group interviews, and a questionnaire. Findings revealed an overall positive student experience of PBL. However, many students found PBL initially stressful due to the deliberately ambiguous nature of the scenario and the requirement upon students to direct their own learning. The tutor role was unclear to some students, while others found the facilitative approach empowering. Recommendations are offered which may be of value to students, teachers and practitioners implementing and facilitating PBL within Making A Difference curricula (Department of Health 1999).
Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco; ...
2018-03-15
Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTBmore » model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol –1.« less
The cost of crime to society: new crime-specific estimates for policy and program evaluation.
McCollister, Kathryn E; French, Michael T; Fang, Hai
2010-04-01
Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Opportunistic tri-band carrier aggregation in licensed spectrum for multi-operator 5G hetnet
NASA Astrophysics Data System (ADS)
Maksymuk, Taras; Kyryk, Maryan; Klymash, Mykhailo; Jo, Minho; Romaniuk, Ryszard; Kotyra, Andrzej; Zhanpeisova, Aizhan; Kozbekova, Ainur
2017-08-01
Increasing the capacity of mobile networks is a real challenge due to rapidly growing traffic demands and spectrum scarcity. Carrier aggregation technology aims to increase the user data rate by combining the throughput of several spectrum bands, even if they are not physically collocated. Utilizing the unlicensed 5 GHz Wi-Fi band for mobile transmission opens new perspectives for carrier aggregation, given the vast amount of spectrum that can be made available to supplement end-user data rates. Many solutions have been proposed to enable mobile data transmission in the unlicensed band without causing disruptive interference to existing Wi-Fi users. This paper presents a new approach to opportunistic carrier aggregation across licensed and unlicensed bands for a multi-operator 5G network. It allows multiple network operators to utilize unlicensed spectrum opportunistically whenever it is not currently used by Wi-Fi or by other mobile network operators. The performance of the proposed approach has been simulated for the case of two competing operators. Simulation results reveal that the proposed method achieves satisfactory carrier aggregation performance in this two-operator setting.
Tactical resource allocation and elective patient admission planning in care processes.
Hulshof, Peter J H; Boucherie, Richard J; Hans, Erwin W; Hurink, Johann L
2013-06-01
Tactical planning of resources in hospitals concerns elective patient admission planning and the intermediate-term allocation of resource capacities. Its main objectives are to achieve equitable access for patients, to meet production targets (i.e., to serve the strategically agreed number of patients), and to use resources efficiently. This paper proposes a method to develop a tactical resource allocation and elective patient admission plan. These tactical plans allocate available resources to various care processes and determine which patients, at particular stages of their care process, are selected to be served. Our method is developed in a Mixed Integer Linear Programming (MILP) framework and copes with multiple resources, multiple time periods and multiple patient groups with various uncertain treatment paths through the hospital, thereby integrating decision making across a chain of hospital resources. Computational results indicate that our method leads to a more equitable distribution of resources and provides control over patient access times, the number of patients served and the fraction of allocated resource capacity. Our approach is generic, as the base MILP and the solution approach allow for various extensions to both the objective criteria and the constraints. Consequently, the proposed method is applicable in various settings of tactical hospital management.
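As a concrete illustration of the kind of MILP described here, the sketch below allocates operating-room hours to patient groups over a short horizon using the open-source PuLP library. The groups, hour demands, capacities, and the single aggregate objective are hypothetical stand-ins for the paper's richer multi-resource, multi-stage, equity-weighted model.

```python
# Minimal MILP sketch in the spirit of tactical admission planning (PuLP).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger

groups = ["hip", "cataract"]                      # patient groups (hypothetical)
periods = [1, 2, 3, 4]                            # planning periods (e.g., weeks)
demand_hours = {"hip": 3.0, "cataract": 1.0}      # OR hours per admission
capacity = {1: 40.0, 2: 40.0, 3: 32.0, 4: 40.0}   # OR hours available per period
target = {"hip": 30, "cataract": 60}              # strategically agreed numbers

prob = LpProblem("tactical_admission_plan", LpMaximize)
admit = LpVariable.dicts(
    "admit", [(g, t) for g in groups for t in periods], lowBound=0, cat=LpInteger
)

# Objective: serve as many patients as possible (a stand-in for the paper's
# weighted equity / production-target criteria).
prob += lpSum(admit[g, t] for g in groups for t in periods)

for t in periods:   # resource capacity per period
    prob += lpSum(demand_hours[g] * admit[g, t] for g in groups) <= capacity[t]
for g in groups:    # do not exceed the agreed production target
    prob += lpSum(admit[g, t] for t in periods) <= target[g]

prob.solve()
for g in groups:
    print(g, [int(admit[g, t].value()) for t in periods])
```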
Koo, Hyun Jung; Kim, Mi Young; Koo, Ja Hwan; Sung, Yu Sub; Jung, Jiwon; Kim, Sung-Han; Choi, Chang-Min; Kim, Hwa Jung
2017-01-01
Radiologists have used margin characteristics based on routine visual analysis; however, the attenuation changes at the margin of a lesion on CT images have not been quantitatively assessed. We established a CT-based margin analysis method by comparing a target lesion with normal lung attenuation, fitting a slope to represent the attenuation changes. This approach was applied to patients with invasive mucinous adenocarcinoma (n = 40) or bacterial pneumonia (n = 30). Correlations among multiple regions of interest (ROIs) were obtained using intraclass correlation coefficient (ICC) values. CT visual assessment, margin and texture parameters were compared for differentiating the two disease entities. The attenuation and margin parameters in multiple ROIs showed excellent ICC values. Attenuation slopes obtained at the margins revealed a difference between invasive mucinous adenocarcinoma and pneumonia (P<0.001), with mucinous adenocarcinoma producing a sharply declining attenuation slope. On multivariable logistic regression analysis, pneumonia had an ill-defined margin (odds ratio (OR), 4.84; 95% confidence interval (CI), 1.26-18.52; P = 0.02), ground-glass opacity (OR, 8.55; 95% CI, 2.09-34.95; P = 0.003), and gradually declining attenuation at the margin (OR, 12.63; 95% CI, 2.77-57.51, P = 0.001). The CT-based margin analysis method has the potential to serve as an imaging parameter for differentiating invasive mucinous adenocarcinoma from bacterial pneumonia.
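The margin-slope idea can be pictured as fitting a line to attenuation values sampled across the lesion boundary. A minimal sketch with synthetic Hounsfield-unit values follows; the paper's actual method uses multiple ROIs and comparison against normal lung, so this is illustrative only.

```python
# Hedged sketch of the margin-slope idea: sample CT attenuation (HU) along a
# line crossing the lesion boundary and fit a slope. Values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
distance_mm = np.linspace(-10, 10, 21)  # position relative to the margin
# Sharp margin: attenuation drops quickly from lesion (~40 HU) to lung (~-800 HU).
hu = np.where(distance_mm < 0, 40.0, -800.0) + rng.normal(0, 20, distance_mm.size)

slope, intercept = np.polyfit(distance_mm, hu, 1)
print(f"attenuation slope: {slope:.1f} HU/mm")  # steeply negative => sharp margin
```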
Arcuri, G G; McMullan, A E; Murray, A E; Silver, L K; Bergthorson, M; Dahan-Oliel, N; Coutinho, F
2016-03-01
Family-centred services (FCS) are best practice in paediatric rehabilitation and describe philosophies and approaches to medical care that emphasize the partnership and involvement of parents. While evidence supports FCS, there are complexities to its successful implementation. This mixed-methods study aimed to measure the extent to which parents and healthcare providers (HCPs) perceive service provision as being family centred, and to describe barriers and facilitators to the delivery of FCS. Parents of children participating in a rehabilitation programme and HCPs providing services participated in this study. Parents completed the measure of processes of care-20 and participated in interviews, while HCPs completed the measure of processes of care-service providers and participated in a focus group. Quantitative analysis revealed that parents were mostly satisfied with features of FCS, including communication and support between parents and HCPs, respect for diversity, and parental collaboration and participation. Parents identified communication methods and psychosocial needs as areas that facilitated, but sometimes detracted from, FCS. Multiple stakeholders identified institutional barriers and corresponding areas for improvement; HCPs identified more such areas than parents. Considering these barriers, it is evident that implementation is a complex process, strongly shaped by institutional constraints. FCS needs to be investigated further, and systemic interventions should be used to facilitate its implementation. © 2015 John Wiley & Sons Ltd.
A Tutorial on Multiple Testing: False Discovery Control
NASA Astrophysics Data System (ADS)
Chatelain, F.
2016-09-01
This paper presents an overview of criteria and methods in multiple testing, with an emphasis on the false discovery rate control. The popular Benjamini and Hochberg procedure is described. The rationale for this approach is explained through a simple Bayesian interpretation. Some state-of-the-art variations and extensions are also presented.
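For reference, the Benjamini-Hochberg step-up procedure described above fits in a few lines of NumPy: sort the p-values, find the largest index i with p_(i) <= (i/m)q, and reject that hypothesis together with all smaller p-values. This controls the false discovery rate at level q for independent or positively dependent tests; the example data are invented.

```python
# Benjamini-Hochberg step-up procedure for FDR control at level q.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index passing its threshold
        rejected[order[: k + 1]] = True   # reject it and all smaller p-values
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # first two hypotheses rejected
```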
ERIC Educational Resources Information Center
Weaver, Adam D.; McKevitt, Brian C.; Farris, Allie M.
2017-01-01
Multiple-stimulus without replacement preference assessment is a research-based method for identifying appropriate rewards for students with emotional and behavioral disorders. This article presents a brief history of how this technology evolved and describes a step-by-step approach for conducting the procedure. A discussion of necessary materials…
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) over the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is used to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
Multi-processing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
The MIMD concept is applied, through multitasking, with relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. An existing single processor algorithm is mapped without the need for developing a new algorithm. The procedure of designing a code utilizing this approach is automated with the Unix stream editor. A Multiple Processor Multiple Grid (MPMG) code is developed as a demonstration of this approach. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. This solver is applied to a generic, oblique-wing aircraft problem on a four-processor computer using one process for data management and nonparallel computations and three processes for pseudotime advance on three different grid systems.
Nonlinear two-dimensional terahertz photon echo and rotational spectroscopy in the gas phase.
Lu, Jian; Zhang, Yaqing; Hwang, Harold Y; Ofori-Okai, Benjamin K; Fleischer, Sharly; Nelson, Keith A
2016-10-18
Ultrafast 2D spectroscopy uses correlated multiple light-matter interactions for retrieving dynamic features that may otherwise be hidden under the linear spectrum; its extension to the terahertz regime of the electromagnetic spectrum, where a rich variety of material degrees of freedom reside, remains an experimental challenge. We report a demonstration of ultrafast 2D terahertz spectroscopy of gas-phase molecular rotors at room temperature. Using time-delayed terahertz pulse pairs, we observe photon echoes and other nonlinear signals resulting from molecular dipole orientation induced by multiple terahertz field-dipole interactions. The nonlinear time domain orientation signals are mapped into the frequency domain in 2D rotational spectra that reveal J-state-resolved nonlinear rotational dynamics. The approach enables direct observation of correlated rotational transitions and may reveal rotational coupling and relaxation pathways in the ground electronic and vibrational state.
Kobayashi, Keigo; Naoki, Katsuhiko; Kuroda, Aoi; Yasuda, Hiroyuki; Kawada, Ichiro; Soejima, Kenzo; Betsuyaku, Tomoko
2017-01-01
A 69-year-old man with post-operative recurrence of lung adenocarcinoma was treated with multiple chemotherapies, including epidermal growth factor receptor (EGFR)-tyrosine kinase inhibitors. A second biopsy revealed an EGFR T790M mutation. As 10th-line chemotherapy, osimertinib was initiated. After 24 weeks, chest computed tomography (CT) revealed asymptomatic ground-glass opacities in both lobes. After four weeks of osimertinib discontinuation, imaging revealed rapid lung cancer progression. Osimertinib was resumed. After 11 weeks, CT revealed decreased lung nodules with no exacerbation of interstitial lung disease. We describe a patient who experienced transient asymptomatic pulmonary opacities during treatment with osimertinib, which was successfully managed by a “stop-and-go” approach. PMID:29269665
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models usually operate as a second-pass decision procedure on hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model in use based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. Our approach can be developed independently and integrated directly into the current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
Spoilt for choice - A comparison of downscaling approaches for hydrological impact studies
NASA Astrophysics Data System (ADS)
Rössler, Ole; Fischer, Andreas; Kotlarski, Sven; Keller, Denise; Liniger, Mark; Weingartner, Rolf
2017-04-01
With the increasing number of available climate downscaling approaches, users often face the luxury problem of deciding which downscaling method to apply in a climate change impact assessment study. In Switzerland, for instance, the new generation of local-scale climate scenarios CH2018 will be based on quantile mapping (QM), replacing the previous delta change (DC) method. In parallel to those two methods, a multi-site weather generator (WG) was developed to meet specific user needs. This poses the question of which downscaling method is the most suitable for a given application. Here, we analyze the differences between the three approaches in terms of the hydro-meteorological responses in the Swiss pre-Alps, considering mean values as well as indices of extremes. The comparison was carried out in the frame of a hydrological impact assessment study that focused on different runoff characteristics and their related meteorological indices in the meso-scale catchment of the river Thur (approximately 1700 km2), Switzerland. For this purpose, we set up the hydrological model WaSiM-ETH under present (1980-2009) and future conditions (2070-2099), assuming the SRES A1B emission scenario. Input to the three downscaling approaches were 10 GCM-RCM simulations of the ENSEMBLES project, while eight meteorological station observations served as the reference. All station data, observed and downscaled, were interpolated to obtain the meteorological fields of temperature and precipitation required by the hydrological model. For the present-day reference period we evaluated the ability of each downscaling method to reproduce today's hydro-meteorological patterns. In the scenario runs, we focused on comparing the change signals for each hydro-meteorological parameter generated by the three downscaling techniques. The evaluation exercise reveals that QM and WG perform equally well in representing present-day average conditions, but that QM outperforms WG in reproducing indices related to extreme conditions such as the number of drought events or multi-day rain sums. In terms of mean monthly discharge changes, the three downscaling methods reveal notable differences: DC shows the strongest change signal in summer and a less pronounced one in winter. Regarding some extreme features of runoff, such as the frequency of droughts and the low-flow level, DC shows change signals similar to those of QM and WG. This was unexpected, as DC is commonly reported to fail in projecting changes in extremes. In contrast, QM mostly shows the strongest change signals for the 10 different extreme-related indices, owing to its ability to pick up more features of the climate change signal from the RCM. This indicates that DC and also WG miss some aspects, especially for flood-related indices. Hence, depending on the target variable of interest, DC and QM typically bracket the full range of change signals, while WG mostly lies in between both methods; WG, however, offers the great advantage of multiple realizations combined with inter-variable consistency.
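Of the three methods compared, quantile mapping is the most compact to illustrate: each raw model value is mapped to the observed value at the same quantile. The sketch below uses synthetic data; operational QM implementations (including CH2018) add refinements such as seasonal windows and extrapolation rules for values beyond the calibration range.

```python
# Minimal empirical quantile-mapping sketch (one flavor of the QM family).
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map model values onto the observed distribution via matching quantiles."""
    # Empirical quantile of each future value within the historical model data.
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_hist, ranks)  # corresponding observed quantiles

rng = np.random.default_rng(1)
model_hist = rng.gamma(2.0, 2.0, 5000)    # biased model precipitation (synthetic)
obs_hist = rng.gamma(2.0, 3.0, 5000)      # station observations (synthetic)
model_future = rng.gamma(2.2, 2.0, 1000)  # scenario run, slightly wetter
corrected = quantile_map(model_hist, obs_hist, model_future)
print(corrected.mean(), model_future.mean())  # bias-corrected vs raw means
```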
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
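A hypothetical sketch of the prescreening idea: fit a marginal model per gene in each dataset and pool the evidence across studies before ranking. The combination rule below (summed squared Fisher z-statistics) is illustrative, not the paper's exact statistic.

```python
# Hypothetical marginal prescreening pooled across multiple datasets.
import numpy as np
from scipy import stats

def prescreen(datasets, top_k=100):
    """datasets: list of (X, y) with X of shape (n_samples, n_genes)."""
    n_genes = datasets[0][0].shape[1]
    score = np.zeros(n_genes)
    for X, y in datasets:
        for j in range(n_genes):                     # marginal model per gene
            r, _ = stats.pearsonr(X[:, j], y)
            z = np.arctanh(r) * np.sqrt(len(y) - 3)  # Fisher z-statistic
            score[j] += z ** 2                       # pool evidence across studies
    return np.argsort(score)[::-1][:top_k]

rng = np.random.default_rng(2)
datasets = []
for n in (50, 60):                                   # two small studies
    X = rng.normal(size=(n, 500))
    y = 0.8 * X[:, 0] + rng.normal(size=n)           # gene 0 is truly associated
    datasets.append((X, y))
print(prescreen(datasets, top_k=5))                  # gene 0 should rank first
```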
Jenke, Dennis; Couch, Thomas R; Robinson, Sarah J; Volz, Trent J; Colton, Raymond H
2014-01-01
Extracts of plastic packaging, manufacturing, and delivery systems (or their materials of construction) are analyzed by chromatographic methods to establish the system's extractables profile. The testing strategy consists of multiple orthogonal chromatographic methods, for example, gas and liquid chromatography with multiple detection strategies. Although this orthogonal testing strategy is comprehensive, it is not necessarily complete and members of the extractables profile can elude detection and/or accurate identification/quantification. Because the chromatographic methods rarely indicate that some extractables have been missed, another means of assessing the completeness of the profiling activity must be established. If the extracts are aqueous and contain no organic additives (e.g., pH buffers), then they can be analyzed for their total organic carbon content (TOC). Additionally, the TOC of an extract can be calculated based on the extractables revealed by the screening analyses. The measured and calculated TOC can be reconciled to establish the completeness and accuracy of the extractables profile. If the reconciliation is poor, then the profile is either incomplete or inaccurate and additional testing is needed to establish the complete and accurate profile. Ten test materials and components of systems were extracted and their extracts characterized for organic extractables using typical screening procedures. Measured and calculated TOC was reconciled to establish the completeness of the revealed extractables profile. When the TOC reconciliation was incomplete, the profiling was augmented with additional analytical testing to reveal the missing members of the organic extractables profile. This process is illustrated via two case studies involving aqueous extracts of sterile filters. Plastic materials and systems used to manufacture, contain, store, and deliver pharmaceutical products are extracted and the extracts analyzed to establish the materials' (or systems') organic extractables profile. Such testing typically consists of multiple chromatographic approaches whose differences help to ensure that all organic extractables are revealed, measured, and identified. Nevertheless, this rigorous screening process is not infallible and certain organic extractables may elude detection. If the extraction medium is aqueous, the process of total organic carbon (TOC) reconciliation is proposed as a means of establishing when some organic extractables elude detection. In the reconciliation, the TOC of the extracts is both directly measured and calculated from the chromatographic data. The measured and calculated TOC is compared (or reconciled), and the degree of reconciliation is an indication of the completeness and accuracy of the organic extractables profiling. If the reconciliation is poor, then the extractables profile is either incomplete or inaccurate and additional testing must be performed to establish the complete and accurate profile. This article demonstrates the TOC reconciliation process by considering aqueous extracts of 10 different test articles. Incomplete reconciliations were augmented with additional testing to produce a more complete TOC reconciliation. © PDA, Inc. 2014.
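The reconciliation itself is simple arithmetic: convert each identified extractable's concentration to carbon and compare the sum with the measured TOC. A toy sketch follows; the compounds, concentrations, and measured value are invented for illustration.

```python
# TOC reconciliation sketch: carbon contributed by each identified extractable
# is summed and compared with the measured TOC of the aqueous extract.
carbon_fraction = {"caprolactam": 0.637, "bisphenol A": 0.789}  # mass fraction C
extract_mg_per_L = {"caprolactam": 2.0, "bisphenol A": 0.5}     # from screening

calculated_toc = sum(extract_mg_per_L[c] * carbon_fraction[c]
                     for c in extract_mg_per_L)  # mg C / L
measured_toc = 2.4                               # mg C / L (direct assay)

recovery = calculated_toc / measured_toc
print(f"calculated {calculated_toc:.2f} vs measured {measured_toc:.2f} mg C/L "
      f"({recovery:.0%} accounted for)")
# A low recovery flags extractables that eluded the chromatographic screens.
```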
An integrate-over-temperature approach for enhanced sampling.
Gao, Yi Qin
2008-02-14
A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
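One common form of such a temperature-integrated bias (as in integrated tempering sampling) builds an effective potential from Boltzmann factors at a ladder of temperatures, U_eff(x) = -(1/beta0) * ln( sum_k n_k exp(-beta_k U(x)) ). The sketch below assumes kcal/mol units and uniform temperature weights; in practice the weights n_k are iterated so that each temperature contributes comparably.

```python
# Sketch of a multi-temperature effective potential (assumed ITS-like form).
import numpy as np

kB = 0.0019872  # kcal/(mol K), assuming kcal/mol energy units

def effective_potential(U, temps, weights, T0=300.0):
    betas = 1.0 / (kB * np.asarray(temps))
    beta0 = 1.0 / (kB * T0)
    # log-sum-exp over temperatures for numerical stability
    logterms = np.log(np.asarray(weights)) - np.outer(betas, np.atleast_1d(U)).T
    m = logterms.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(logterms - m).sum(axis=1))
    return -lse / beta0

U = np.linspace(0.0, 20.0, 5)            # potential energies, kcal/mol
temps = np.linspace(280.0, 500.0, 8)     # temperature ladder
weights = np.full(temps.size, 1.0 / temps.size)
print(effective_potential(U, temps, weights))  # flattened relative to U itself
```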
Finite-temperature time-dependent variation with multiple Davydov states
NASA Astrophysics Data System (ADS)
Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang
2017-03-01
The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite-temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from the numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansatz is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.
The application of multiple intelligence approach to the learning of human circulatory system
NASA Astrophysics Data System (ADS)
Kumalasari, Lita; Yusuf Hilmi, A.; Priyandoko, Didik
2017-11-01
The purpose of this study is to offer an alternative teaching approach and strategies able to accommodate students' differing abilities, intelligences and learning styles, and to give teachers, as facilitators, new ideas for exploring creative, more student-centred ways of teaching lessons such as the circulatory system. The study was carried out at one private school in Bandung and involved eight students, whose responses were examined toward a lesson delivered using the Multiple Intelligence approach, which includes the Linguistic, Logical-Mathematical, Visual-Spatial, Musical, Bodily-Kinesthetic, Interpersonal, Intrapersonal, and Naturalistic intelligences. Students were tested using an MI test based on Howard Gardner's MI model to identify their dominant intelligences. The top three intelligences were Bodily-Kinesthetic (73%), Visual-Spatial (68%), and Logical-Mathematical (61%). The lesson was delivered using varied multimedia and activities matched to the students' learning styles and intelligences, such as mini experiments, short clips, and questions. Student responses were collected through self-assessment: all students said the lesson gave them knowledge and skills useful for their lives, that the explanations were clear, and that they had no difficulty understanding the lesson or completing the assignments. The study reveals that students taught with the Multiple Intelligence instructional approach engaged more deeply with the lesson, and that students who participated in learning activities in which the Multiple Intelligence approach was applied enjoyed them and had great fun.
Regnerus, Mark
2017-09-01
The study of stigma's influence on health has surged in recent years. Hatzenbuehler et al.'s (2014) study of structural stigma's effect on mortality revealed an average of 12 years' shorter life expectancy for sexual minorities who resided in communities thought to exhibit high levels of anti-gay prejudice, using data from the 1988-2002 administrations of the US General Social Survey linked to mortality outcome data in the 2008 National Death Index. In the original study, the key predictor variable (structural stigma) led to results suggesting the profound negative influence of structural stigma on the mortality of sexual minorities. Attempts to replicate the study, in order to explore alternative hypotheses, repeatedly failed to generate the original study's key finding on structural stigma. Efforts to discern the source of the disparity in results revealed complications in the multiple imputation process for missing values of the components of structural stigma. This prompted efforts at replication using 10 different imputation approaches. Efforts to replicate Hatzenbuehler et al.'s (2014) key finding on structural stigma's notable influence on the premature mortality of sexual minorities, including a more refined imputation strategy than described in the original study, failed. No data imputation approach yielded parameters that supported the original study's conclusions. Alternative hypotheses, which originally motivated the present study, revealed little new information. Ten different approaches to multiple imputation of missing data yielded none in which the effect of structural stigma on the mortality of sexual minorities was statistically significant. Minimally, the original study's structural stigma variable (and hence its key result) is so sensitive to subjective measurement decisions as to be rendered unreliable. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
COMPACT, CONTINUOUS MONITORING FOR VOLATILE ORGANIC COMPOUNDS - PHASE I
Improved methods for onsite measurement of multiple volatile organic compounds are needed for process control, monitoring, and remediation. This Phase I SBIR project sets forth an optical measurement method that meets these needs. The proposed approach provides an instantaneous m...
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
[Physical rehabilitation in multiple sclerosis: general principles and high-tech approaches].
Peresedova, A V; Chernikova, L A; Zavalishin, I A
2013-01-01
In a chronic and disabling disease like multiple sclerosis, rehabilitation programs are of major importance for the preservation of physical, physiological, social and professional functioning and for improvement of quality of life. Currently, it is generally accepted that physical activity is an important component of non-pharmacological rehabilitation in multiple sclerosis. Properly organized exercise is a safe and efficient way to improve a number of physiological functions. A multidisciplinary rehabilitative approach is recommended. The main recommendations for the use of exercise in patients with multiple sclerosis are listed. Published results of robot-assisted training for improving hand function and walking impairment are reviewed. An important trend in the rehabilitation of patients with multiple sclerosis is the reduction of postural disorders through balance and coordination training. The role of transcranial magnetic stimulation in reducing spasticity is being investigated. The use of telemedicine capabilities is also quite promising. Because a decline in physical activity can lead to the deterioration of many physiological functions and, ultimately, to decreased mobility, further research into physical rehabilitation as an important therapeutic approach for preventing the progression of disability in multiple sclerosis is required.
Quantitative multi-target RNA profiling in Epstein-Barr virus infected tumor cells.
Greijer, A E; Ramayanti, O; Verkuijlen, S A W M; Novalić, Z; Juwana, H; Middeldorp, J M
2017-03-01
Epstein-Barr virus (EBV) is etiologically linked to multiple acute, chronic and malignant diseases. Detection of EBV-RNA transcripts in tissues or biofluids, in addition to EBV-DNA, can help in diagnosing EBV-related syndromes. Sensitive EBV transcription profiling yields new insights into its pathogenic role and may be useful for monitoring virus-targeted therapy. Here we describe a multi-gene quantitative RT-PCR profiling method that simultaneously detects a broad spectrum (n=16) of crucial latent and lytic EBV transcripts. These transcripts include (but are not restricted to) EBNA1, EBNA2, LMP1, LMP2, BARTs, EBER1, BARF1 and ZEBRA, Rta, BGLF4 (PK), BXLF1 (TK) and BFRF3 (VCAp18), all of which have been implicated in EBV-driven oncogenesis and viral replication. With this method we determine the number of RNA copies per infected (tumor) cell in bulk populations of various origins. While we confirm the expected RNA profiles within classic EBV latency programs, this sensitive quantitative approach revealed the presence of rare cells undergoing lytic replication. Inducing lytic replication in EBV tumor cells promotes apoptosis and is considered a therapeutic approach for treating EBV-driven malignancies. This sensitive multi-primed quantitative RT-PCR approach can provide broader understanding of transcriptional activity in latent and lytic EBV infection and is suitable for monitoring virus-specific therapy responses in patients with EBV-associated cancers. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Dissociating functional brain networks by decoding the between-subject variability
Seghier, Mohamed L.; Price, Cathy J.
2009-01-01
In this study we illustrate how the functional networks involved in a single task (e.g. the sensory, cognitive and motor components) can be segregated without cognitive subtractions at the second level. The method used is based on meaningful variability in the patterns of activation between subjects, with the assumption that regions belonging to the same network will have comparable variations from subject to subject. fMRI data were collected from thirty-nine healthy volunteers who were asked to indicate with a button press if visually presented words were semantically related or not. Voxels were classified according to the similarity in their patterns of between-subject variance using a second-level unsupervised fuzzy clustering algorithm. The results were compared to those identified by cognitive subtractions of multiple conditions tested in the same set of subjects. This illustrated that the second-level clustering approach (on activation for a single task) was able to identify the functional networks observed using cognitive subtractions (e.g. those associated with vision, semantic associations or motor processing). In addition, the fuzzy clustering approach revealed other networks that were not dissociated by the cognitive subtraction approach (e.g. those associated with high- and low-level visual processing and oculomotor movements). We discuss the potential applications of our method, which include the identification of “hidden” or unpredicted networks as well as the identification of systems-level signatures for different subgroupings of clinical and healthy populations. PMID:19150501
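Fuzzy clustering of this kind assigns each voxel a soft membership in every network, so voxels with similar subject-to-subject variation end up grouped together. The compact fuzzy c-means sketch below conveys the idea on synthetic data (rows as voxels, columns as subjects); the study's actual second-level algorithm differs in its details.

```python
# Compact fuzzy c-means sketch: soft clustering of between-subject patterns.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=X.shape[0])      # (n, c) soft memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]    # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))                # standard FCM update
        u /= u.sum(axis=1, keepdims=True)               # normalize memberships
    return u, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 10))      # three synthetic "networks"
               for loc in (-1.0, 0.0, 1.0)])
u, centers = fuzzy_cmeans(X, c=3)
print(u.argmax(axis=1)[:10])                            # hard labels of first voxels
```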
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To address persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. We then relax the integer program to a standard linear programming problem, so the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming and has a lower worst-case complexity than competing methods, with better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
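The underlying network formulation can be reproduced in miniature. The sketch below solves the LP relaxation of a two-frame tracking network with scipy's HiGHS solver rather than the shortest path faster algorithm, and the detections and association costs are invented. Because the constraint matrix of such a flow network is totally unimodular, the LP optimum is integral, which is what makes the relaxation exact.

```python
# Toy min-cost-flow LP for two-frame data association.
import numpy as np
from scipy.optimize import linprog

# nodes: source S, detections a1,a2 (frame 1), b1,b2 (frame 2), sink T
# arcs:  S->a1, S->a2, a1->b1, a1->b2, a2->b1, a2->b2, b1->T, b2->T
cost = np.array([0, 0, 1.0, 4.0, 5.0, 1.5, 0, 0])  # association costs (invented)

A_eq = np.array([
    [1, 0, -1, -1,  0,  0,  0,  0],   # flow conservation at a1
    [0, 1,  0,  0, -1, -1,  0,  0],   # flow conservation at a2
    [0, 0,  1,  0,  1,  0, -1,  0],   # flow conservation at b1
    [0, 0,  0,  1,  0,  1,  0, -1],   # flow conservation at b2
    [1, 1,  0,  0,  0,  0,  0,  0],   # two tracks leave the source
])
b_eq = np.array([0, 0, 0, 0, 2])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8, method="highs")
print(res.x.round(2))  # expect a1->b1 and a2->b2: the integral LP optimum
```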
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
Standardised Library Instruction Assessment: An Institution-Specific Approach
ERIC Educational Resources Information Center
Staley, Shannon M.; Branch, Nicole A.; Hewitt, Tom L.
2010-01-01
Introduction: We explore the use of a psychometric model for locally-relevant, information literacy assessment, using an online tool for standardised assessment of student learning during discipline-based library instruction sessions. Method: A quantitative approach to data collection and analysis was used, employing standardised multiple-choice…
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
Attenuation tomography of the main volcanic regions of the Campanian Plain.
NASA Astrophysics Data System (ADS)
de Siena, Luca; Del Pezzo, Edoardo; Bianco, Francesca
2010-05-01
Passive, high-resolution attenuation tomography is used to image the geological structure in the uppermost 4 km of shallow crust beneath the Campanian Plain. Images were produced by two separate attenuation tomography studies of the main volcanic regions of the Campanian Plain, Southern Italy: Mt. Vesuvius volcano and the Campi Flegrei caldera. The three-dimensional S-wave attenuation tomography of Mt. Vesuvius was obtained from multiple measurements of coda-normalized S-wave spectra of local small-magnitude earthquakes. P-wave attenuation tomography was performed using classical spectral methods. The images were obtained by inverting the spectral data with a multiple-resolution approach expressly designed for attenuation tomography. This allowed us to obtain a robust attenuation image of the volumes under the central cone at a maximum resolution of 300 m. The same approach was applied to a data set recorded in the Campi Flegrei area during the 1982-1984 seismic crisis. Inversion ensures a minimum cell-size resolution of 500 meters in the zones with sufficient ray coverage, and 1000 meters outside these zones. The study of the resolution matrix, as well as synthetic tests, guarantees an optimal reproduction of the input anomalies in the center of the caldera, between 0 and 3.5 km in depth. The results allowed an unprecedented view of several features of the medium, such as the residual solidified magma from the last eruption under the central cone of Mt. Vesuvius, and the feeding systems and the top of the carbonate basement 3 km below both volcanic areas. Vertical Q contrasts image important fault zones, such as the La Starza fault, as well as high-attenuation structures that correspond to gas or fluid reservoirs, and reveal the upper part of gas-bearing conduits connecting these high-attenuation volumes with the magma sill revealed at about 7 km depth by passive travel-time tomography under the whole Campanian Plain.
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Chang, Sin-Chung; Yu, Sheng-Tao; Wang, Xiao-Yen; Loh, Ching-Yuen; Jorgenson, Philip C. E.
1999-01-01
In this overview paper, we review the basic principles of the method of space-time conservation element and solution element for solving the conservation laws in one and two spatial dimensions. The present method is developed on the basis of local and global flux conservation in a space-time domain, in which space and time are treated in a unified manner. In contrast to the modern upwind schemes, the approach here does not use the Riemann solver and the reconstruction procedure as the building blocks. The drawbacks of the upwind approach, such as the difficulty of rationally extending the 1D scalar approach to systems of equations and particularly to multiple dimensions, are here contrasted with the uniformity and ease of generalization of the Conservation Element and Solution Element (CE/SE) 1D scalar schemes to systems of equations and to multiple spatial dimensions. The assured compatibility with the simplest type of unstructured meshes, and the uniquely simple nonreflecting boundary conditions of the present method, are also discussed. The present approach has yielded high-resolution shocks, rarefaction waves, acoustic waves, vortices, ZND detonation waves, and shock/acoustic wave/vortex interactions. Moreover, since no directional splitting is employed, the numerical resolution of two-dimensional calculations is comparable to that of one-dimensional calculations. Some sample applications displaying the strengths and broad applicability of the CE/SE method are reviewed.
Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C
2016-02-01
Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define if a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation is proposed as a more accurate and flexible method for error adjustments in feed intake data collected by electronic feeders.
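A rough analogue of the MICE-style adjustment can be put together with scikit-learn's IterativeImputer, a chained-equations imputer; the paper's exact implementation, error cutoffs, and covariates differ, and the data below are simulated. Visits flagged as errors are set to missing and re-imputed from the remaining records.

```python
# MICE-style adjustment of flagged feed-intake records (sketch, simulated data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
pigs, days = 20, 60
true_dfi = 2.0 + 0.02 * np.arange(days) + rng.normal(0, 0.15, (pigs, days))

dfi = true_dfi.copy()
err = rng.random((pigs, days)) < 0.10  # ~10% erroneous visits, as in one scenario
dfi[err] = np.nan                      # flagged by the error cutoff criteria

imputer = IterativeImputer(max_iter=10, random_state=0)
adjusted = imputer.fit_transform(dfi)  # rows: pigs, columns: test days

# Accuracy as in the paper: correlation of true vs adjusted values.
corr = np.corrcoef(true_dfi[err], adjusted[err])[0, 1]
print(f"accuracy on adjusted records: r = {corr:.3f}")
```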
The semantics of pain in Greco-Roman antiquity.
Wilson, Nicole
2013-01-01
The semantics of pain are an important and interesting aspect of any language. Ancient Greek and Latin had multiple words for pain, which makes scrutinizing different meanings problematic. The ancient physician Galen approached this issue through the use of adjectives to describe the qualities for pain, instead of the words for pain themselves. The medical texts of Celsus and Caelius Aurelianus reveal that Latin also vested particular significance in qualifiers to distinguish between different types of pain. This article looks at the qualifying terms used for pain in the ancient Greek and Latin languages to reveal a sophisticated Greco-Roman vocabulary for pain.
A powerful approach reveals numerous expression quantitative trait haplotypes in multiple tissues.
Ying, Dingge; Li, Mulin Jun; Sham, Pak Chung; Li, Miaoxin
2018-04-26
Recently, many studies have shown that single nucleotide polymorphisms (SNPs) affect gene expression and contribute to the development of complex traits/diseases in a tissue context-dependent manner. However, little is known about the influence of haplotypes on gene expression and complex traits, which reflects the interaction effect between SNPs. In the present study, we first propose a regulatory-region-guided eQTL haplotype association analysis approach, and then use it to systematically investigate expression quantitative trait loci (eQTL) haplotypes in 20 different tissues. The approach reduces the computational burden by using regulatory predictions for candidate SNP selection and by applying multiple-testing corrections suited to non-independent haplotypes. Application across multiple tissues showed that haplotype-based eQTLs not only increase the number of eQTL genes in a tissue-specific manner, but are also enriched in loci associated with complex traits in a tissue-matched manner. In addition, we found that tag SNPs of eQTL haplotypes from whole blood were selectively enriched in certain combinations of regulatory elements (e.g. promoters and enhancers) according to predicted chromatin states. In summary, this eQTL haplotype detection approach, together with the application results, sheds light on the synergistic effect of sequence variants on gene expression and their contribution to susceptibility to complex diseases. The executable application "eHaplo" is implemented in Java and is publicly available at http://grass.cgs.hku.hk/limx/ehaplo/. jonsonfox@gmail.com, limiaoxin@mail.sysu.edu.cn. Supplementary data are available at Bioinformatics online.
An approach for fixed coefficient RNS-based FIR filter
NASA Astrophysics Data System (ADS)
Srinivasa Reddy, Kotha; Sahoo, Subhendu Kumar
2017-08-01
In this work, an efficient new modular multiplication method for the {2^k - 1, 2^k, 2^(k+1) - 1} moduli set is proposed to implement a residue number system (RNS)-based fixed-coefficient finite impulse response filter. The new multiplication approach reduces the number of partial products by using a pre-loaded product block. The reduction in partial products with the proposed modular multiplication improves the clock frequency and reduces the area and power compared with conventional modular multiplication. Further, the present approach eliminates the binary-to-residue converter circuit that is usually needed at the front end of an RNS-based system. In this work, two fixed-coefficient filter architectures with the new modular multiplication approach are proposed. The filters are implemented in the Verilog hardware description language. The United Microelectronics Corporation 90 nm technology library has been used for synthesis, and the area, power and delay results are obtained with the Cadence register transfer level compiler. The power delay product (PDP) is also considered for performance comparison among the proposed filters. One of the proposed architectures is found to improve the PDP by 60.83% compared with a filter implemented with a conventional modular multiplier. The filters' functionality is validated with the help of the Altera DSP Builder.
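What makes this moduli set attractive is that reduction modulo 2^k - 1 (and 2^(k+1) - 1) needs only shifts and additions, and reduction modulo 2^k is just bit masking. The sketch below shows generic RNS multiplication for the set; it illustrates the arithmetic only, not the paper's pre-loaded product-block multiplier.

```python
# Generic RNS arithmetic for the moduli set {2^k - 1, 2^k, 2^(k+1) - 1}.
def mod_mersenne(x: int, k: int) -> int:
    """Reduce x modulo 2**k - 1 using shift-and-add folding (end-around carry)."""
    m = (1 << k) - 1
    while x > m:
        x = (x & m) + (x >> k)  # fold high bits back in, since 2**k == 1 (mod m)
    return 0 if x == m else x

def rns_mul(a: int, b: int, k: int):
    """Multiply in the RNS with moduli (2**k - 1, 2**k, 2**(k+1) - 1)."""
    p = a * b
    return (mod_mersenne(p, k),       # residue mod 2**k - 1
            p & ((1 << k) - 1),       # residue mod 2**k: keep the low k bits
            mod_mersenne(p, k + 1))   # residue mod 2**(k+1) - 1

k = 4
a, b = 13, 11
print(rns_mul(a, b, k))                              # RNS residues of 143
print((a * b) % 15, (a * b) % 16, (a * b) % 31)      # cross-check: (8, 15, 19)
```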
Fused-data transrectal EIT for prostate cancer imaging.
Murphy, Ethan K; Wu, Xiaotian; Halter, Ryan J
2018-05-25
Prostate cancer is a significant problem affecting 1 in 7 men. Unfortunately, the diagnostic gold-standard of ultrasound-guided biopsy misses 10%-30% of all cancers. The objective of this study was to develop an electrical impedance tomography (EIT) approach that has the potential to image the entire prostate using multiple impedance measurements recorded between electrodes integrated onto an end-fired transrectal ultrasound (TRUS) device and a biopsy probe (BP). Simulations and sensitivity analyses were used to investigate the best combination of electrodes, and measured tank experiments were used to evaluate a fused-data transrectal EIT (fd-TREIT) and BP approach. Simulations and sensitivity analysis revealed that (1) TREIT measurements are not sufficiently sensitive to image the whole prostate, (2) the combination of TREIT + BP measurements increases the sensitive region of TREIT-only measurements by 12×, and (3) the fusion of multiple TREIT + BP measurements collected during a routine or customized 12-core biopsy procedure can cover up to 76.1% or 94.1% of a nominal 50 cm³ prostate, respectively. Three measured tank experiments of the fd-TREIT + BP approach successfully and accurately recovered the positions of 2-3 metal or plastic inclusions. The measured tank experiments represent important steps in the development of an algorithm that can combine EIT from multiple locations and from multiple probes, data that could be collected during a routine TRUS-guided 12-core biopsy. Overall, this result is a step towards a clinically deployable impedance imaging approach to scanning the entire prostate, which could significantly help to improve prostate cancer diagnosis.
2011-01-01
Background The avian family Cettiidae, including the genera Cettia, Urosphena, Tesia, Abroscopus and Tickellia and Orthotomus cucullatus, has recently been proposed based on analysis of a small number of loci and species. The close relationship of most of these taxa was unexpected, and called for a comprehensive study based on multiple loci and dense taxon sampling. In the present study, we infer the relationships of all except one of the species in this family using one mitochondrial and three nuclear loci. We use traditional gene tree methods (Bayesian inference, maximum likelihood bootstrapping, parsimony bootstrapping), as well as a recently developed Bayesian species tree approach (*BEAST) that accounts for lineage sorting processes that might produce discordance between gene trees. We also analyse mitochondrial DNA for a larger sample, comprising multiple individuals and a large number of subspecies of polytypic species. Results There are many topological incongruences among the single-locus trees, although none of these is strongly supported. The multi-locus tree inferred using concatenated sequences and the species tree agree well with each other, and are overall well resolved and well supported by the data. The main discrepancy between these trees concerns the most basal split. Both methods infer the genus Cettia to be highly non-monophyletic, as it is scattered across the entire family tree. Deep intraspecific divergences are revealed, and one or two species and one subspecies are inferred to be non-monophyletic (differences between methods). Conclusions The molecular phylogeny presented here is strongly inconsistent with the traditional, morphology-based classification. The remarkably high degree of non-monophyly in the genus Cettia is likely to be one of the most extraordinary examples of misconceived relationships in an avian genus. The phylogeny suggests instances of parallel evolution, as well as highly unequal rates of morphological divergence in different lineages. This complex morphological evolution apparently misled earlier taxonomists. These results underscore the well-known but still often neglected problem of basing classifications on overall morphological similarity. Based on the molecular data, a revised taxonomy is proposed. Although the traditional and species tree methods inferred much the same tree in the present study, the assumption by species tree methods that all species are monophyletic is a limitation in these methods, as some currently recognized species might have more complex histories. PMID:22142197
NASA Astrophysics Data System (ADS)
Medina, Tait Runnfeldt
The increasing global reach of survey research provides sociologists with new opportunities to pursue theory building and refinement through comparative analysis. However, comparison across a broad array of diverse contexts introduces methodological complexities related to the development of constructs (i.e., measurement modeling) that, if not adequately recognized and properly addressed, undermine the quality of research findings and cast doubt on the validity of substantive conclusions. The motivation for this dissertation arises from a concern that the availability of cross-national survey data has outpaced sociologists' ability to appropriately analyze and draw meaningful conclusions from such data. I examine the implicit assumptions and detail the limitations of three commonly used measurement models in cross-national analysis: the summative scale, the pooled factor model, and the multiple-group factor model with measurement invariance. Using the orienting lens of the double tension, I argue that a new approach to measurement modeling that incorporates important cross-national differences into the measurement process is needed. Two such measurement models, the multiple-group factor model with partial measurement invariance (Byrne, Shavelson and Muthen 1989) and the alignment method (Asparouhov and Muthen 2014; Muthen and Asparouhov 2014), are discussed in detail and illustrated using a sociologically relevant substantive example. I demonstrate that the former approach is vulnerable to an identification problem that arbitrarily impacts substantive conclusions. I conclude that the alignment method is built on model assumptions that are consistent with theoretical understandings of cross-national comparability and provides an approach to measurement modeling and construct development that is uniquely suited for cross-national research. The dissertation makes three major contributions: First, it provides theoretical justification for a new cross-national measurement model and explicates a link between theoretical conceptions of cross-national comparability and a statistical method. Second, it provides a clear and detailed discussion of model identification in multiple-group confirmatory factor analysis that is missing from the literature. This discussion sets the stage for the introduction of the identification problem within multiple-group confirmatory factor analysis with partial measurement invariance and the alternative approach to model identification employed by the alignment method. Third, it offers the first pedagogical presentation of the alignment method using a sociologically relevant example.
Multiplicative noise removal via a learned dictionary.
Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong
2012-11-01
Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and on logarithmically transforming the multiplicative denoising problem into an additive one. Sparse representations of images have been shown to be efficient for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
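A minimal sketch of the log-domain pipeline, with soft-thresholding of DCT coefficients standing in for the learned dictionary of the paper:

```python
# Multiplicative (speckle-like) noise becomes approximately additive after a
# log transform, is denoised there, and the estimate is mapped back.
import numpy as np
from scipy.fft import dctn, idctn

def denoise_multiplicative(image, threshold=0.1):
    log_im = np.log(np.maximum(image, 1e-6))    # multiplicative -> additive
    coeffs = dctn(log_im, norm="ortho")
    # Soft-threshold as a crude sparse-coding stand-in for the dictionary.
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
    return np.exp(idctn(coeffs, norm="ortho"))  # back to intensity domain
```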
Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.
Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André
2016-08-12
Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.
Monitoring of self-healing composites: a nonlinear ultrasound approach
NASA Astrophysics Data System (ADS)
Malfense Fierro, Gian-Piero; Pinto, Fulvio; Dello Iacono, Stefania; Martone, Alfonso; Amendola, Eugenio; Meo, Michele
2017-11-01
Self-healing composites using a thermally mendable polymer based on the Diels-Alder reaction were fabricated and subjected to multiple damage loads. Unlike traditional destructive methods, this work presents a nonlinear ultrasound technique to evaluate the structural recovery of the proposed self-healing laminate structures. The results were compared to computed tomography and linear ultrasound methods. The laminates were subjected to multiple loading and healing cycles, and the induced damage and recovery at each stage were evaluated. The results highlight the advantage of using a nonlinearity-based methodology to monitor the structural recovery of a reversibly cross-linked epoxy with efficient recycling and multiple self-healing capability.
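One common nonlinear-ultrasound indicator, which may differ from the authors' exact metric, is the relative second-harmonic amplitude beta ~ A2/A1^2 extracted from the received spectrum; a minimal sketch:

```python
# Estimate a nonlinearity indicator from fundamental (A1) and second-harmonic
# (A2) amplitudes of a received ultrasound signal. Damage raises it; healing
# should bring it back down.
import numpy as np

def nonlinearity_parameter(signal, fs, f0, bw=0.05):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    def peak(f):  # strongest bin within +/- bw of the target frequency
        band = (freqs > f * (1 - bw)) & (freqs < f * (1 + bw))
        return spec[band].max()
    a1, a2 = peak(f0), peak(2 * f0)
    return a2 / a1**2
```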
NASA Astrophysics Data System (ADS)
Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.
2008-10-01
Multi-period differences in technical and financial performance are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information: Malmquist discriminates between technological and managerial progress, while PROMETHEE distinguishes two often conflicting dimensions of performance, namely service to the community and enterprise performance.
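For reference, PROMETHEE II reduces to computing net outranking flows from pairwise preference degrees; the sketch below assumes a linear preference function and illustrative weights and thresholds, not the ARGOS configuration used in the study.

```python
# PROMETHEE II net-flow computation with a linear preference function.
import numpy as np

def promethee_ii(scores, weights, p_thresholds):
    """scores: (n_alternatives, n_criteria), higher is better.
    weights: (n_criteria,), summing to 1. p_thresholds: (n_criteria,).
    Returns net outranking flows (higher = preferred)."""
    n, _ = scores.shape
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            # linear preference: 0 below 0, 1 above p, linear in between
            pref = np.clip(d / p_thresholds, 0.0, 1.0)
            pi_ab = np.dot(weights, pref)
            phi[a] += pi_ab / (n - 1)
            phi[b] -= pi_ab / (n - 1)
    return phi
```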
Zhang, Shenyan; Wen, Bo; Zhou, Baojin; Yang, Lei; Cha, Chao; Xu, Shaoxing; Qiu, Xuemei; Wang, Quanhui; Sun, Haidan; Lou, Xiaomin; Zi, Jin; Zhang, Yong; Lin, Liang; Liu, Siqi
2013-05-03
Members of the human aldo-keto reductase (AKR) superfamily have been reported to be involved in cancer progression, although no firm consensus has been reached. Herein, we propose a quantitative method to measure human AKR proteins in cells using mTRAQ-based multiple reaction monitoring (MRM). AKR peptides with multiple transitions were carefully selected upon tryptic digestion of the recombinant AKR proteins, while AKR proteins were identified by SDS-PAGE fractionation coupled with LC-MS/MS. Utilizing mTRAQ triplex labeling to produce the derivative peptides, calibration curves were generated using the mixed lysate as background; the two sets of calibration curves, with mixed or single lysate as background, yielded no significantly different quantification of AKRs. We employed this approach to quantitatively determine the six AKR proteins AKR1A1, AKR1B1, AKR1B10, AKR1C1/C2, AKR1C3, and AKR1C4 in seven different cancer cell lines, and for the first time obtained the absolute quantities of all the AKR proteins in each cell line. The cluster plot revealed that AKR1A and AKR1B were widely distributed in most cancer cells with relatively stable abundances, whereas AKR1Cs were unevenly detected among these cells with widely varying abundances. The AKR quantitative distribution in different cancer cells may therefore assist further exploration of how the AKR proteins are involved in tumorigenesis.
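The calibration-curve quantification amounts to fitting measured light/heavy ratios against known spike-in amounts and inverting the fit; the numbers below are illustrative, not the paper's data.

```python
# Calibration-curve quantification for one MRM transition (illustrative data).
import numpy as np

amounts = np.array([0.1, 0.5, 1.0, 5.0, 10.0])   # fmol spiked (assumed)
ratios  = np.array([0.09, 0.48, 1.1, 5.2, 9.8])  # light/heavy area ratios

slope, intercept = np.polyfit(amounts, ratios, 1)

def quantify(ratio):
    """Invert the linear fit to estimate the endogenous peptide amount."""
    return (ratio - intercept) / slope

print(quantify(2.3))  # estimated fmol of an AKR peptide in the sample
```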
Global resilience analysis of water distribution systems.
Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David
2016-12-01
Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals that GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another, and that increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies.
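A toy sketch of the GRA idea, assuming graph connectivity as a crude stand-in for a full hydraulic simulation: sweep the magnitude of one failure mode, sample many random failure combinations per level, and record the spread of impacts.

```python
# Global resilience analysis (toy version): stress a network over increasing
# failure magnitudes and record best/worst-case impacts at each level.
import random
import networkx as nx

def gra_pipe_failure(G, source, levels=10, reps=100):
    results = {}
    edges = list(G.edges())
    for lvl in range(1, levels + 1):
        frac = lvl / levels
        impacts = []
        for _ in range(reps):
            H = G.copy()
            H.remove_edges_from(random.sample(edges, int(frac * len(edges))))
            served = sum(1 for n in H if nx.has_path(H, source, n))
            impacts.append(1 - served / G.number_of_nodes())
        results[frac] = (min(impacts), max(impacts))  # impact range per level
    return results
```

Plotting the per-level impact ranges yields the kind of stress-strain overview the abstract describes, including outliers such as total supply loss at low failure fractions.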
Experimental single-strain mobilomics reveals events that shape pathogen emergence
Schoeniger, Joseph S.; Hudson, Corey M.; Bent, Zachary W.; ...
2016-07-04
Virulence and resistance genes carried on mobile DNAs such as genomic islands (GIs) and plasmids promote bacterial pathogen emergence. An early step in the mobilization of GIs is their excision, which produces both a circular form of the GI and a deletion site in the chromosome; circular forms have also been described for some bacterial insertion sequences (ISs). We demonstrate that the recombinant sequence produced at the junction of such circles, and their corresponding deletion sites, can be detected sensitively in high throughput sequencing data, using new computational methods that enable empirical discovery of new mobile DNAs. Applied to the rich mobilome of a single strain (Kpn2146) of the emerging multidrug-resistant pathogen Klebsiella pneumoniae, our approach detected circular junctions for six GIs and seven IS types (several of the latter not previously known to circularize). Our methods further revealed differential biology of multiple mobile DNAs, imprecision of integrases and transposases, and differential activity among identical IS copies for IS26, ISKpn18 and ISKpn21. Exonuclease was used to enrich for circular dsDNA molecules, and internal calibration with the native Kpn2146 plasmids showed that not all molecules bearing GI and IS circular junctions were circular dsDNAs. Transposition events were also detected, revealing replicon preference (ISKpn18 preferring a conjugative IncA/C2 plasmid), local action (IS26), regional preferences, selection (against capsule synthesis), and left-right IS end swapping. Efficient discovery and global characterization of numerous mobile elements per experiment will allow detailed accounting of bacterial evolution, explaining the new gene combinations that arise in emerging pathogens.
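The junction-detection idea can be sketched compactly: from an island's chromosomal coordinates, build the predicted circle-junction and deletion-site sequences and count reads containing them. This is a minimal illustration, not the published pipeline, which handles read mapping, calibration, and exonuclease enrichment.

```python
# Predicted junction sequences for an excised genomic island and exact-match
# read support. Coordinates and flank length are illustrative.
def junction_sequences(chrom, start, end, flank=25):
    """chrom: chromosome sequence; [start, end): island coordinates.
    Circle junction = island end joined to island start;
    deletion site = chromosome flanks joined across the excised island."""
    circle_junction = chrom[end - flank:end] + chrom[start:start + flank]
    deletion_site = chrom[start - flank:start] + chrom[end:end + flank]
    return circle_junction, deletion_site

def count_support(reads, junction):
    # Count reads carrying the junction on either strand.
    rc = junction.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    return sum(1 for r in reads if junction in r or rc in r)
```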
NASA Astrophysics Data System (ADS)
Lan, Chunbo; Tang, Lihua; Qin, Weiyang
2017-07-01
Nonlinear energy harvesters have attracted wide research attention in recent years for their broadband performance. Nonlinear structures have multiple solutions in certain frequency regions, containing both high-energy and low-energy orbits. It is effectively the frequency region over which a high-energy orbit can be captured that determines the broadband performance; thus, maintaining large-amplitude, high-energy-orbit oscillations is highly desirable. In this paper, a voltage impulse perturbation approach based on negative resistance is applied to trigger high-energy-orbit responses of piezoelectric nonlinear energy harvesters. First, the mechanism of the voltage impulse perturbation and the implementation of the synthetic negative resistance circuit are discussed in detail. Subsequently, numerical simulation and experiment are conducted, and the results demonstrate that high-energy-orbit oscillations can be triggered by the voltage impulse perturbation method for both monostable and bistable configurations in various scenarios. It is revealed that the perturbation levels required to trigger and maintain high-energy-orbit oscillations differ across excitation frequencies in the region where multiple solutions exist. The higher gain in voltage output when high-energy-orbit oscillations are captured comes at the cost of a higher voltage impulse perturbation level.
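A hedged numerical sketch of the triggering mechanism, using a bistable Duffing oscillator and a velocity kick as a stand-in for the voltage impulse delivered through the negative-resistance circuit; all parameters are illustrative, and the kick size needed to jump orbits varies with excitation frequency, as the abstract notes.

```python
# Bistable Duffing oscillator: settle on a (possibly low-energy) orbit, then
# apply an impulsive velocity kick and compare response amplitudes.
import numpy as np
from scipy.integrate import solve_ivp

zeta, F, w = 0.02, 0.08, 1.1  # damping, forcing amplitude, frequency (assumed)

def duffing(t, y):
    x, v = y
    return [v, -2 * zeta * v + x - x**3 + F * np.cos(w * t)]

sol1 = solve_ivp(duffing, (0, 300), [1.0, 0.0], max_step=0.05)
kicked = [sol1.y[0, -1], sol1.y[1, -1] + 1.5]   # the "impulse perturbation"
sol2 = solve_ivp(duffing, (300, 600), kicked, max_step=0.05)
# Peak-to-peak displacement before vs. after the kick:
print(np.ptp(sol1.y[0, -2000:]), np.ptp(sol2.y[0, -2000:]))
```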
Multiplexed Sequence Encoding: A Framework for DNA Communication
Zakeri, Bijan; Carr, Peter A.; Lu, Timothy K.
2016-01-01
Synthetic DNA has great propensity for efficiently and stably storing non-biological information. With DNA writing and reading technologies rapidly advancing, new applications for synthetic DNA are emerging in data storage and communication. Traditionally, DNA communication has focused on the encoding and transfer of complete sets of information. Here, we explore the use of DNA for the communication of short messages that are fragmented across multiple distinct DNA molecules. We identified three pivotal points in a communication—data encoding, data transfer & data extraction—and developed novel tools to enable communication via molecules of DNA. To address data encoding, we designed DNA-based individualized keyboards (iKeys) to convert plaintext into DNA, while reducing the occurrence of DNA homopolymers to improve synthesis and sequencing processes. To address data transfer, we implemented a secret-sharing system—Multiplexed Sequence Encoding (MuSE)—that conceals messages between multiple distinct DNA molecules, requiring a combination key to reveal messages. To address data extraction, we achieved the first instance of chromatogram patterning through multiplexed sequencing, thereby enabling a new method for data extraction. We envision these approaches will enable more widespread communication of information via DNA. PMID:27050646
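The two encoding ideas can be illustrated together: a keyboard-style mapping onto DNA trimers chosen to avoid homopolymers within a codon, and a one-time-pad split of the message across two strands so that neither alone reveals it. The alphabet, codon choice, and sharing scheme below are assumptions for illustration, not the actual iKeys or MuSE designs.

```python
# Toy DNA keyboard plus two-strand secret sharing (illustrative only).
from itertools import product
import random

ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ.,!?-"        # 32 symbols (assumed)
# Trimers with no repeated adjacent base: no homopolymer inside a codon
# (runs across codon boundaries are ignored in this sketch).
CODONS = [a + b + c for a, b, c in product("ACGT", repeat=3)
          if a != b and b != c][:len(ALPHABET)]

def to_dna(indices):
    return "".join(CODONS[i] for i in indices)

def make_shares(message):
    idx = [ALPHABET.index(c) for c in message.upper()]
    pad = [random.randrange(len(ALPHABET)) for _ in idx]
    share = [(i - p) % len(ALPHABET) for i, p in zip(idx, pad)]
    return to_dna(pad), to_dna(share)    # both strands needed to recover

def recover(dna_pad, dna_share):
    dec = lambda s: [CODONS.index(s[i:i + 3]) for i in range(0, len(s), 3)]
    return "".join(ALPHABET[(p + q) % len(ALPHABET)]
                   for p, q in zip(dec(dna_pad), dec(dna_share)))

pad, share = make_shares("HELLO WORLD")
print(recover(pad, share))               # -> HELLO WORLD
```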
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi
2018-02-01
The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there is no comprehensive evaluation of deep learning segmentation performance across multiple organs and different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach that represents the state of the art of CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned over different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the intersection-over-union (IU) ratio as the criterion. The experimental results demonstrated mean IU values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
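The evaluation criterion is the standard per-label intersection over union; a minimal implementation for labeled 3D volumes:

```python
# Per-organ intersection over union for integer-labeled 3D volumes.
import numpy as np

def iou_per_organ(pred, truth, labels):
    """pred, truth: integer-labeled arrays of equal shape."""
    scores = {}
    for lab in labels:
        p, t = pred == lab, truth == lab
        union = np.logical_or(p, t).sum()
        scores[lab] = np.logical_and(p, t).sum() / union if union else np.nan
    return scores
```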
Managing for resilience: an information theory-based approach to assessing ecosystems
Ecosystems are complex and multivariate; hence, methods to assess the dynamics of ecosystems should have the capacity to evaluate multiple indicators simultaneously. Most research on identifying leading indicators of regime shifts has focused on univariate methods and simple mod...
Collins, Anne G E; Frank, Michael J
2018-03-06
Learning from rewards and punishments is essential to survival and facilitates flexible human behavior. It is widely appreciated that multiple cognitive and reinforcement learning systems contribute to decision-making, but the nature of their interactions is elusive. Here, we leverage methods for extracting trial-by-trial indices of reinforcement learning (RL) and working memory (WM) in human electroencephalography to reveal single-trial computations beyond those afforded by behavior alone. Neural dynamics confirmed that increases in neural expectation were predictive of reduced neural surprise in the following feedback period, supporting central tenets of RL models. Within- and cross-trial dynamics revealed a cooperative interplay between systems for learning, in which WM contributes expectations to guide RL, despite competition between systems during choice. Together, these results provide a deeper understanding of how multiple neural systems interact for learning and decision-making and facilitate analysis of their disruption in clinical populations.
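As a sketch of the kind of trial-by-trial model indices the abstract alludes to, a plain Q-learning model yields per-trial expectation (chosen-option value) and surprise (unsigned prediction error) that can be regressed against EEG features; the authors' joint RL+WM model is richer than this.

```python
# Trial-by-trial expectation and surprise indices from a simple Q-learner.
import numpy as np

def rl_indices(choices, rewards, n_options, alpha=0.3):
    Q = np.full(n_options, 0.5)
    expectation, surprise = [], []
    for c, r in zip(choices, rewards):
        expectation.append(Q[c])      # proxy for neural expectation
        pe = r - Q[c]
        surprise.append(abs(pe))      # proxy for feedback-period surprise
        Q[c] += alpha * pe
    return np.array(expectation), np.array(surprise)
```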
Levesque, Danielle L; Menzies, Allyson K; Landry-Cuerrier, Manuelle; Larocque, Guillaume; Humphries, Murray M
2017-07-01
Recent research is revealing incredible diversity in the thermoregulatory patterns of wild and captive endotherms. As a result of these findings, classic thermoregulatory categories of 'homeothermy', 'daily heterothermy', and 'hibernation' are becoming harder to delineate, impeding our understanding of the physiological and evolutionary significance of variation within and around these categories. However, we lack a generalized analytical approach for evaluating and comparing the complex and diversified nature of the full breadth of heterothermy expressed by individuals, populations, and species. Here we propose a new approach that decomposes body temperature time series into three inherent properties (waveform, amplitude, and period) using a non-stationary technique that accommodates the temporal variability of body temperature patterns. This approach quantifies circadian and seasonal variation in thermoregulatory patterns, and uses the distribution of observed thermoregulatory patterns as a basis for intra- and inter-specific comparisons. We analyse body temperature time series from multiple species, including classical hibernators, tropical heterotherms, and homeotherms, to highlight the approach's general usefulness and the major axes of thermoregulatory variation that it reveals.
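One way to sketch the decomposition, assuming a sliding-window treatment to accommodate non-stationarity: estimate period from the dominant spectral peak, amplitude from the Hilbert envelope, and waveform from the normalized window shape. Window and sampling choices below are illustrative, not the authors' technique.

```python
# Sliding-window extraction of (period, amplitude, waveform) from a body
# temperature series.
import numpy as np
from scipy.signal import hilbert, periodogram

def tb_properties(tb, fs, window):
    """tb: temperature samples; fs: samples per hour; window: samples."""
    out = []
    for s in range(0, len(tb) - window + 1, window):
        seg = tb[s:s + window] - np.mean(tb[s:s + window])
        f, P = periodogram(seg, fs=fs)
        period = 1.0 / f[np.argmax(P[1:]) + 1]       # hours; skip DC bin
        amplitude = np.abs(hilbert(seg)).mean()       # envelope magnitude
        out.append((period, amplitude, seg / max(amplitude, 1e-9)))
    return out
```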
Voillet, Valentin; Besse, Philippe; Liaubet, Laurence; San Cristobal, Magali; González, Ignacio
2016-10-03
In omics data integration studies, it is common, for a variety of reasons, for some individuals to not be present in all data tables. Missing row values are challenging to deal with because most statistical methods cannot be directly applied to incomplete datasets. To overcome this issue, we propose a multiple imputation (MI) approach in a multivariate framework. In this study, we focus on multiple factor analysis (MFA) as a tool to compare and integrate multiple layers of information. MI involves filling the missing rows with plausible values, resulting in M completed datasets. MFA is then applied to each completed dataset to produce M different configurations (the matrices of coordinates of individuals). Finally, the M configurations are combined to yield a single consensus solution. We assessed the performance of our method, named MI-MFA, on two real omics datasets. Incomplete artificial datasets with different patterns of missingness were created from these data. The MI-MFA results were compared with two other approaches, i.e., regularized iterative MFA (RI-MFA) and mean variable imputation (MVI-MFA). For each configuration resulting from these three strategies, the suitability of the solution was determined against the true MFA configuration obtained from the original data, and a comprehensive graphical comparison showing how the MI-, RI- or MVI-MFA configurations diverge from the true configuration was produced. Two approaches, i.e., confidence ellipses and convex hulls, to visualize and assess the uncertainty due to missing values were also described. We showed how the areas of ellipses and convex hulls increased with the number of missing individuals. Free and easy-to-use code is provided to implement the MI-MFA method in the R statistical environment. We believe that MI-MFA provides a useful and attractive method for estimating the coordinates of individuals on the first MFA components despite missing rows. MI-MFA configurations were close to the true configuration even when many individuals were missing in several data tables. This method takes into account the uncertainty of MI-MFA configurations induced by the missing rows, thereby allowing the reliability of the results to be evaluated.
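A minimal sketch of the combination step, with PCA standing in for MFA and orthogonal Procrustes rotation aligning the M configurations before averaging; the real method uses MFA and a dedicated imputation model, and the paper's code is in R rather than the Python used here for consistency with the other sketches.

```python
# Combine M imputed-dataset configurations into a single consensus.
import numpy as np

def mi_consensus(datasets_imputed, n_comp=2):
    configs = []
    for X in datasets_imputed:                      # M completed datasets
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        configs.append(U[:, :n_comp] * s[:n_comp])  # individual coordinates
    ref = configs[0]
    aligned = [ref]
    for C in configs[1:]:                           # orthogonal Procrustes
        U, _, Vt = np.linalg.svd(C.T @ ref)
        aligned.append(C @ (U @ Vt))
    return np.mean(aligned, axis=0)                 # consensus configuration
```

The spread of the M aligned configurations around the consensus is exactly what the confidence ellipses and convex hulls in the paper visualize.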
The SAGE Model of Social Psychological Research
Power, Séamus A.; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-01-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded on multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed. PMID:29361241