ERIC Educational Resources Information Center
Brewe, Eric; Bruun, Jesper; Bearden, Ian G.
2016-01-01
We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…
Fault Tree Analysis Application for Safety and Reliability
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, it does not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
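For readers unfamiliar with the computation such FTA tools automate, the sketch below propagates basic-event probabilities through AND/OR gates to a top event. Event names and probabilities are invented; real SFTA works on fault trees built from the software's structure.

```python
# Toy fault tree evaluation: propagate basic-event probabilities up
# through AND/OR gates to a top event. All events are hypothetical.
def and_gate(*p):
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    # Probability that at least one independent input event occurs.
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

# Basic events (hypothetical software faults).
p_bad_input_check = 1e-3
p_untested_branch = 5e-4
p_watchdog_fails = 1e-4

# Top event: critical failure requires a software fault AND a failed watchdog.
p_software_fault = or_gate(p_bad_input_check, p_untested_branch)
p_top = and_gate(p_software_fault, p_watchdog_fails)
print(f"P(top event) = {p_top:.2e}")
```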
40 CFR 79.33 - Motor vehicle diesel fuel.
Code of Federal Regulations, 2010 CFR
2010-07-01
... data may be for such shorter period. (1) Hydrocarbon composition (aromatic content, olefin content, saturate content), with the methods of analysis identified; (2) Polynuclear organic material content, sulfur content, and trace element content, with the methods of analysis identified; (3) Distillation...
Assessing the validity of prospective hazard analysis methods: a comparison of two techniques
2014-01-01
Background Prospective Hazard Analysis techniques such as Healthcare Failure Modes and Effects Analysis (HFMEA) and Structured What If Technique (SWIFT) have the potential to increase safety by identifying risks before an adverse event occurs. Published accounts of their application in healthcare have identified benefits, but the reliability of some methods has been found to be low. The aim of this study was to examine the validity of SWIFT and HFMEA by comparing their outputs in the process of risk assessment, and comparing the results with risks identified by retrospective methods. Methods The setting was a community-based anticoagulation clinic, in which risk assessment activities had been previously performed and were available. A SWIFT and an HFMEA workshop were conducted consecutively on the same day by experienced experts. Participants were a mixture of pharmacists, administrative staff and software developers. Both methods produced lists of risks scored according to the method’s procedure. Participants’ views about the value of the workshops were elicited with a questionnaire. Results SWIFT identified 61 risks and HFMEA identified 72 risks. For both methods less than half the hazards were identified by the other method. There was also little overlap between the results of the workshops and risks identified by prior root cause analysis, staff interviews or clinical governance board discussions. Participants’ feedback indicated that the workshops were viewed as useful. Conclusions Although there was limited overlap, both methods raised important hazards. Scoping the problem area had a considerable influence on the outputs. The opportunity for teams to discuss their work from a risk perspective is valuable, but these methods cannot be relied upon in isolation to provide a comprehensive description. Multiple methods for identifying hazards should be used and data from different sources should be integrated to give a comprehensive view of risk in a system. PMID:24467813
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was not previously possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
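A toy example of the underlying concept (for ordinary differential equations, not the paper's mixed-effects extensions): in the one-compartment model dx/dt = -k·x with observation y = c·x, the output depends on c and x0 only through their product, so the two are not individually identifiable. A minimal sympy check:

```python
# The output y(t) = c*x0*exp(-k*t) depends on c and x0 only through
# their product, so c and x0 are not individually identifiable, while
# c*x0 and k are. This is an illustrative ODE example, not a
# mixed-effects analysis.
import sympy as sp

t, k, c, x0 = sp.symbols("t k c x0", positive=True)
y = c * x0 * sp.exp(-k * t)

# Two parameter sets give identical outputs whenever c*x0 is preserved:
y_alt = y.subs({c: 2 * c, x0: x0 / 2})
print(sp.simplify(y - y_alt))  # -> 0: (c, x0) not distinguishable from data
```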
Yang, Ze-Hui; Zheng, Rui; Gao, Yuan; Zhang, Qiang
2016-09-01
With the widespread application of high-throughput technology, numerous meta-analysis methods have been proposed for differential expression profiling across multiple studies. We identified the suitable differentially expressed (DE) genes that contributed to lung adenocarcinoma (ADC) clustering based on seven popular meta-analysis methods. Seven microarray expression profiles of ADC and normal controls were extracted from the ArrayExpress database. Bioconductor was used to perform preliminary data preprocessing. Then, DE genes across multiple studies were identified. Hierarchical clustering was applied to compare the classification performance for microarray data samples. The classification efficiency was compared based on accuracy, sensitivity and specificity. Across seven datasets, 573 ADC cases and 222 normal controls were collected. After filtering out unexpressed and noninformative genes, 3688 genes remained for further analysis. The classification efficiency analysis showed that DE genes identified by the sum of ranks method separated ADC from normal controls with the best accuracy, sensitivity and specificity of 0.953, 0.969 and 0.932, respectively. The gene set with the highest classification accuracy mainly participated in the regulation of response to external stimulus (P = 7.97E-04), cyclic nucleotide-mediated signaling (P = 0.01), regulation of cell morphogenesis (P = 0.01) and regulation of cell proliferation (P = 0.01). Evaluating the classification efficiency of DE genes identified by different meta-analysis methods provides a new perspective on choosing the suitable method for a given application. Different meta-analysis methods have different strengths, so the choice of method should be weighed carefully for each particular study. © 2015 John Wiley & Sons Ltd.
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
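Two of the simpler imputations of the kind this review catalogues can be written down directly: approximating a missing SD from the range, and a missing mean from the median and quartiles. The divisor of 4 for the range and the (q1 + median + q3)/3 formula are common practical approximations; the review evaluates several variants, so treat this sketch as illustrative rather than as the recommended estimators.

```python
# Hedged sketch of two simple imputations for missing summary statistics.
# range/4 and (q1 + median + q3)/3 are commonly cited approximations;
# the review compares several alternatives.
def sd_from_range(minimum: float, maximum: float) -> float:
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1: float, median: float, q3: float) -> float:
    return (q1 + median + q3) / 3.0

print(sd_from_range(2.0, 18.0))             # -> 4.0
print(mean_from_quartiles(4.0, 7.0, 13.0))  # -> 8.0
```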
A strategy for evaluating pathway analysis methods.
Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques
2017-10-13
Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method applied to a sub-dataset of the original dataset. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by a particular method applied to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy. Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
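Interpreting the two metrics as set overlaps gives a compact sketch; the paper's exact definitions (normalisation, significance cut-offs) may differ, and the pathway names below are invented.

```python
# Plausible set-overlap reading of the paper's two metrics.
def recall(full_set: set, sub_set: set) -> float:
    # Consistency between pathways found on the full dataset and on a subset.
    return len(full_set & sub_set) / len(full_set) if full_set else 0.0

def discrimination(set_a: set, set_b: set) -> float:
    # Degree to which results from two different experiments differ.
    union = set_a | set_b
    return 1.0 - len(set_a & set_b) / len(union) if union else 0.0

full = {"p53 signaling", "apoptosis", "cell cycle"}
sub = {"apoptosis", "cell cycle"}
other = {"oxidative phosphorylation", "cell cycle"}
print(recall(full, sub))           # 2/3
print(discrimination(sub, other))  # 1 - 1/3 = 2/3
```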
Eastwood, John Graeme; Jalaludin, Bin Badrudin; Kemp, Lynn Ann; Phung, Hai Ngoc
2014-01-01
We have previously reported in this journal on an ecological study of perinatal depressive symptoms in South Western Sydney. In that article, we briefly reported on a factor analysis that was utilized to identify empirical indicators for analysis. In this article, we report on the mixed method approach that was used to identify those latent variables. Social epidemiology has been slow to embrace a latent variable approach to the study of social, political, economic, and cultural structures and mechanisms, partly for philosophical reasons. Critical realist ontology and epistemology have been advocated as an appropriate methodological approach to both theory building and theory testing in the health sciences. We describe here an emergent mixed method approach that uses qualitative methods to identify latent constructs followed by factor analysis using empirical indicators chosen to measure identified qualitative codes. Comparative analysis of the findings is reported together with a limited description of realist approaches to abstract reasoning.
Yan, Yin-zhuo; Qian, Yu-lin; Ji, Feng-di; Chen, Jing-yu; Han, Bei-zhong
2013-05-01
Koji-making is a key process for the production of high-quality soy sauce. The microbial composition during koji-making was investigated by culture-dependent and culture-independent methods to determine predominant bacterial and fungal populations. The culture-dependent methods used were direct culture and colony morphology observation, and PCR amplification of 16S/26S rDNA fragments followed by sequencing analysis. The culture-independent method was based on the analysis of 16S/26S rDNA clone libraries. There were differences between the results obtained by different methods. However, sufficient overlap existed between the different methods to identify potentially significant microbial groups. Sixteen and 20 different bacterial species were identified using culture-dependent and culture-independent methods, respectively. Seven species could be identified by both methods. The most predominant bacterial genera were Weissella and Staphylococcus. Six different fungal species were identified by each of the culture-dependent and culture-independent methods. Only 3 species could be identified by both sets of methods. The most predominant fungi were Aspergillus and Candida species. This work illustrated the importance of a comprehensive polyphasic approach in the analysis of microbial composition during soy sauce koji-making, the knowledge of which will enable further optimization of microbial composition and quality control of koji to upgrade traditional Chinese soy sauce products. Copyright © 2013 Elsevier Ltd. All rights reserved.
Integrative Analysis of Prognosis Data on Multiple Cancer Subtypes
Liu, Jin; Huang, Jian; Zhang, Yawei; Lan, Qing; Rothman, Nathaniel; Zheng, Tongzhang; Ma, Shuangge
2014-01-01
Summary In cancer research, profiling studies have been extensively conducted, searching for genes/SNPs associated with prognosis. Cancer is diverse. Examining the similarity and difference in the genetic basis of multiple subtypes of the same cancer can lead to a better understanding of their connections and distinctions. Classic meta-analysis methods analyze each subtype separately and then compare analysis results across subtypes. Integrative analysis methods, in contrast, analyze the raw data on multiple subtypes simultaneously and can outperform meta-analysis methods. In this study, prognosis data on multiple subtypes of the same cancer are analyzed. An AFT (accelerated failure time) model is adopted to describe survival. The genetic basis of multiple subtypes is described using the heterogeneity model, which allows a gene/SNP to be associated with prognosis of some subtypes but not others. A compound penalization method is developed to identify genes that contain important SNPs associated with prognosis. The proposed method has an intuitive formulation and is realized using an iterative algorithm. Asymptotic properties are rigorously established. Simulation shows that the proposed method has satisfactory performance and outperforms a penalization-based meta-analysis method and a regularized thresholding method. An NHL (non-Hodgkin lymphoma) prognosis study with SNP measurements is analyzed. Genes associated with the three major subtypes, namely DLBCL, FL, and CLL/SLL, are identified. The proposed method identifies genes that are different from alternatives and have important implications and satisfactory prediction performance. PMID:24766212
Who's in and why? A typology of stakeholder analysis methods for natural resource management.
Reed, Mark S; Graves, Anil; Dandy, Norman; Posthumus, Helena; Hubacek, Klaus; Morris, Joe; Prell, Christina; Quinn, Claire H; Stringer, Lindsay C
2009-04-01
Stakeholder analysis means many things to different people. Various methods and approaches have been developed in different fields for different purposes, leading to confusion over the concept and practice of stakeholder analysis. This paper asks how and why stakeholder analysis should be conducted for participatory natural resource management research. This is achieved by reviewing the development of stakeholder analysis in business management, development and natural resource management. The normative and instrumental theoretical basis for stakeholder analysis is discussed, and a stakeholder analysis typology is proposed. This consists of methods for: i) identifying stakeholders; ii) differentiating between and categorising stakeholders; and iii) investigating relationships between stakeholders. The range of methods that can be used to carry out each type of analysis is reviewed. These methods and approaches are then illustrated through a series of case studies funded through the Rural Economy and Land Use (RELU) programme. These case studies show the wide range of participatory and non-participatory methods that can be used, and discuss some of the challenges and limitations of existing methods for stakeholder analysis. The case studies also propose new tools and combinations of methods that can more effectively identify and categorise stakeholders and help understand their inter-relationships.
A method for identifying EMI critical circuits during development of a large C3 system
NASA Astrophysics Data System (ADS)
Barr, Douglas H.
The circuit analysis methods and process Boeing Aerospace used on a large, ground-based military command, control, and communications (C3) system are described. This analysis was designed to help identify electromagnetic interference (EMI) critical circuits. The methodology used the MIL-E-6051 equipment criticality categories as the basis for defining critical circuits, relational database technology to help sort through and account for all of the approximately 5000 system signal cables, and Macintosh Plus personal computers to predict critical circuits based on safety margin analysis. The EMI circuit analysis process systematically examined all system circuits to identify which ones were likely to be EMI critical. The process used two separate, sequential safety margin analyses to identify critical circuits (conservative safety margin analysis, and detailed safety margin analysis). These analyses used field-to-wire and wire-to-wire coupling models using both worst-case and detailed circuit parameters (physical and electrical) to predict circuit safety margins. This process identified the predicted critical circuits that could then be verified by test.
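The core screening arithmetic is simple: a circuit is flagged EMI-critical when the predicted induced level comes within the required margin of its susceptibility threshold. The 6 dB figure is typical of MIL-E-6051-style margin requirements, and the circuit names and levels below are invented:

```python
# Illustrative safety-margin screen; all values are hypothetical.
circuits = [
    # (name, susceptibility threshold dBuV, predicted induced level dBuV)
    ("fire-control discrete", 60.0, 57.0),
    ("telemetry data line", 80.0, 55.0),
]

REQUIRED_MARGIN_DB = 6.0
for name, threshold, induced in circuits:
    margin = threshold - induced
    status = "CRITICAL" if margin < REQUIRED_MARGIN_DB else "ok"
    print(f"{name}: margin = {margin:.1f} dB -> {status}")
```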
School Foodservice Personnel's Struggle with Using Labels to Identify Whole-Grain Foods
ERIC Educational Resources Information Center
Chu, Yen Li; Orsted, Mary; Marquart, Len; Reicks, Marla
2012-01-01
Objective: To describe how school foodservice personnel use current labeling methods to identify whole-grain products and the influence on purchasing for school meals. Methods: Focus groups explored labeling methods to identify whole-grain products and barriers to incorporating whole-grain foods in school meals. Qualitative analysis procedures and…
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has conventionally been based on a deterministic process conceptualization that uses a single model to represent each process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may bias the results, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the change in variance when a process is fixed at each of its alternative conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
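A minimal numeric sketch of the idea, under invented stand-ins for the alternative recharge conceptualizations: treat the model choice as a random input and estimate the fraction of output variance attributable to it (a first-order Sobol'-type index via the law of total variance).

```python
# The recharge models and output function are invented stand-ins; the
# paper's method additionally applies model averaging across processes.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two alternative conceptualizations of the recharge process.
def recharge_model_a(theta):
    return 10.0 + 2.0 * theta

def recharge_model_b(theta):
    return 12.0 + 0.5 * theta**2

theta = rng.normal(1.0, 0.3, n)    # uncertain parameter
model = rng.integers(0, 2, n)      # uncertain model choice (50/50)
y = np.where(model == 0, recharge_model_a(theta), recharge_model_b(theta))

# First-order index for the process conceptualization:
var_total = y.var()
cond_means = np.array([y[model == m].mean() for m in (0, 1)])
s_process = cond_means.var() / var_total  # Var over models of E[Y | model]
print(f"fraction of variance from model choice = {s_process:.2f}")
```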
Blasco, H; Błaszczyński, J; Billaut, J C; Nadal-Desbarats, L; Pradat, P F; Devos, D; Moreau, C; Andres, C R; Emond, P; Corcia, P; Słowiński, R
2015-02-01
Metabolomics is an emerging field that includes ascertaining a metabolic profile from a combination of small molecules, and which has health applications. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have not been either standardized or validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides a complementary method for improving the predictive performance of the multivariate data analysis usually used in metabolomics. This method could help in the identification of metabolites involved in disease pathogenesis. Interestingly, these different strategies mostly identified the same metabolites as being discriminant. The selection of strong decision rules with high value of Bayesian confirmation provides useful information about relevant condition-decision relationships not otherwise revealed in metabolomics data. Copyright © 2014 Elsevier Inc. All rights reserved.
Method of identifying hairpin DNA probes by partial fold analysis
Miller, Benjamin L. [Penfield, NY]; Strohsahl, Christopher M. [Saugerties, NY]
2009-10-06
Method of identifying molecular beacons in which a secondary structure prediction algorithm is employed to identify oligonucleotide sequences within a target gene having the requisite hairpin structure. Isolated oligonucleotides, molecular beacons prepared from those oligonucleotides, and their use are also disclosed.
Method of identifying hairpin DNA probes by partial fold analysis
Miller, Benjamin L.; Strohsahl, Christopher M.
2008-10-28
Methods of identifying molecular beacons in which a secondary structure prediction algorithm is employed to identify oligonucleotide sequences within a target gene having the requisite hairpin structure. Isolated oligonucleotides, molecular beacons prepared from those oligonucleotides, and their use are also disclosed.
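The two patent records above rely on a secondary-structure prediction algorithm; as a crude stand-in, the sketch below scans a target sequence for inverted repeats (reverse-complementary ends flanking a loop), the signature of a hairpin-forming candidate. The stem/loop/probe lengths and the target sequence are invented.

```python
# Crude proxy for secondary-structure prediction: find probe-length
# windows whose ends are reverse complements (a stem around a loop).
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def hairpin_candidates(target: str, stem=5, loop=6, probe_len=16):
    for i in range(len(target) - probe_len + 1):
        p = target[i:i + probe_len]
        if probe_len >= 2 * stem + loop and p[:stem] == revcomp(p[-stem:]):
            yield i, p

target = "AAGGGCGATTAGCCGCCCTTACGT"  # invented target gene fragment
for pos, probe in hairpin_candidates(target):
    print(pos, probe)
```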
ERIC Educational Resources Information Center
Nickles, George
2007-01-01
This article describes using Work Action Analysis (WAA) as a method for identifying requirements for a web-based portal that supports a professional development program. WAA is a cognitive systems engineering method for modeling multi-agent systems to support design and evaluation. A WAA model of the professional development program of the…
ERIC Educational Resources Information Center
Hoffman, John L.; Bresciani, Marilee J.
2012-01-01
This mixed method study explored the professional competencies that administrators expect from entry-, mid-, and senior-level professionals as reflected in 1,759 job openings posted in 2008. Knowledge, skill, and dispositional competencies were identified during the qualitative phase of the study. Statistical analysis of the prevalence of…
Wilson, Paul; Larminie, Christopher; Smith, Rona
2016-01-01
To use literature mining to catalogue Behçet's associated genes, and advanced computational methods to improve the understanding of the pathways and signalling mechanisms that lead to the typical clinical characteristics of Behçet's patients. To extend this technique to identify potential treatment targets for further experimental validation. Text mining methods combined with gene enrichment tools, pathway analysis and causal analysis algorithms. This approach identified 247 human genes associated with Behçet's disease and the resulting disease map, comprising 644 nodes and 19,220 edges, captured important details of the relationships between these genes and their associated pathways, as described in diverse data repositories. Pathway analysis has identified how Behçet's associated genes are likely to participate in innate and adaptive immune responses. Causal analysis algorithms have identified a number of potential therapeutic strategies for further investigation. Computational methods have captured pertinent features of the prominent disease characteristics presented in Behçet's disease and have highlighted NOD2, ICOS and IL18 signalling as potential therapeutic strategies.
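The standard computation behind gene-set enrichment of the sort described is a hypergeometric (one-sided Fisher) test; a sketch with invented counts:

```python
# P(overlap >= k) between a mined gene list and a pathway gene set,
# under random sampling without replacement. Counts are illustrative.
from scipy.stats import hypergeom

N = 20000  # genes in the background
K = 150    # genes annotated to the pathway (e.g., an innate-immunity set)
n = 247    # Behçet's-associated genes from text mining
k = 12     # overlap between the two sets

p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.3g}")
```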
Yu, Chunhao; Wang, Chong-Zhi; Zhou, Chun-Jie; Wang, Bin; Han, Lide; Zhang, Chun-Feng; Wu, Xiao-Hui; Yuan, Chun-Su
2014-01-01
American ginseng (Panax quinquefolius) is native to North America. Owing to price differences and supply shortages, American ginseng has recently been cultivated in northern China. Further, in the market, some Asian ginsengs are labeled as American ginseng. In this study, forty-three American ginseng samples cultivated in the USA, Canada or China were collected and 14 ginseng saponins were determined using HPLC. HPLC coupled with hierarchical cluster analysis and principal component analysis was developed to identify the species. Subsequently, an HPLC-linear discriminant analysis was established to discriminate the cultivation regions of American ginseng. This method was successfully applied to identify the sources of 6 commercial American ginseng samples. Two of them were identified as Asian ginseng, while the 4 others were identified as American ginseng cultivated in the USA (3) and China (1). Our newly developed method can be used to identify American ginseng from different cultivation regions. PMID:25044150
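The chemometric pipeline described, PCA for an exploratory view followed by linear discriminant analysis to classify cultivation region from HPLC peak areas, might look like the following; the data are random placeholders, not the paper's 14-saponin measurements.

```python
# Hedged sketch of the PCA + LDA chemometric workflow on fake data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(43, 14))  # 43 samples x 14 saponin peak areas
regions = rng.choice(["USA", "Canada", "China"], size=43)

scores = PCA(n_components=2).fit_transform(X)  # exploratory view
lda = LinearDiscriminantAnalysis().fit(X, regions)
print("training accuracy:", lda.score(X, regions))
```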
Xu, Ning; Zhou, Guofu; Li, Xiaojuan; Lu, Heng; Meng, Fanyun; Zhai, Huaqiang
2017-05-01
A reliable and comprehensive method for identifying the origin and assessing the quality of Epimedium has been developed. The method is based on analysis of HPLC fingerprints, combined with similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and multi-ingredient quantitative analysis. Nineteen batches of Epimedium, collected from different areas in the western regions of China, were used to establish the fingerprints and 18 peaks were selected for the analysis. Similarity analysis, HCA and PCA all classified the 19 areas into three groups. Simultaneous quantification of the five major bioactive ingredients in the Epimedium samples was also carried out to confirm the consistency of the quality tests. These methods were successfully used to identify the geographical origin of the Epimedium samples and to evaluate their quality. Copyright © 2016 John Wiley & Sons, Ltd.
Risk Analysis Methods for Deepwater Port Oil Transfer Systems
DOT National Transportation Integrated Search
1976-06-01
This report deals with the risk analysis methodology for oil spills from the oil transfer systems in deepwater ports. Failure mode and effect analysis in combination with fault tree analysis are identified as the methods best suited for the assessmen...
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
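A minimal version of a Euclidean-distance style detector of the kind compared here: alarm when the current window of water-quality readings departs from a baseline by more than a threshold. The variables, readings, and threshold are illustrative; the paper's MED, PE, and LPF variants differ in detail.

```python
# Alarm when the multivariate reading strays too far from baseline.
import numpy as np

def euclidean_alarm(baseline: np.ndarray, window: np.ndarray,
                    threshold: float) -> bool:
    return bool(np.linalg.norm(window - baseline) > threshold)

baseline = np.array([0.8, 7.2, 250.0])  # e.g. chlorine, pH, conductivity
normal = np.array([0.82, 7.1, 251.0])
event = np.array([0.30, 6.5, 310.0])

print(euclidean_alarm(baseline, normal, threshold=5.0))  # False
print(euclidean_alarm(baseline, event, threshold=5.0))   # True
```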
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Biomarkers identified by urinary metabonomics for noninvasive diagnosis of nutritional rickets.
Wang, Maoqing; Yang, Xue; Ren, Lihong; Li, Songtao; He, Xuan; Wu, Xiaoyan; Liu, Tingting; Lin, Liqun; Li, Ying; Sun, Changhao
2014-09-05
Nutritional rickets is a worldwide public health problem; however, the current diagnostic methods retain shortcomings for accurate diagnosis of nutritional rickets. To identify urinary biomarkers associated with nutritional rickets and establish a noninvasive diagnosis method, urinary metabonomics analysis by ultra-performance liquid chromatography/quadrupole time-of-flight tandem mass spectrometry and multivariate statistical analysis were employed to investigate the metabolic alterations associated with nutritional rickets in 200 children with or without nutritional rickets. The pathophysiological changes and pathogenesis of nutritional rickets were illustrated by the identified biomarkers. By urinary metabolic profiling, 31 biomarkers of nutritional rickets were identified and five candidate biomarkers for clinical diagnosis were screened and identified by quantitative analysis and receiver operating curve analysis. Urinary levels of five candidate biomarkers were measured using mass spectrometry or commercial kits. In the validation step, the combination of phosphate and sebacic acid was able to give a noninvasive and accurate diagnostic with high sensitivity (94.0%) and specificity (71.2%). Furthermore, on the basis of the pathway analysis of biomarkers, our urinary metabonomics analysis gives new insight into the pathogenesis and pathophysiology of nutritional rickets.
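The biomarker screening step described (ranking candidates by receiver operating characteristic performance) can be sketched with scikit-learn; the marker values below are synthetic, and the study's actual five-marker panel and cut-offs come from its own data.

```python
# Rank candidate urinary markers by ROC AUC on synthetic case/control data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = np.r_[np.ones(100), np.zeros(100)]  # 100 rickets cases, 100 controls
phosphate = np.r_[rng.normal(2.0, 0.5, 100), rng.normal(1.2, 0.5, 100)]
sebacic = np.r_[rng.normal(1.5, 0.6, 100), rng.normal(1.0, 0.6, 100)]

for name, marker in [("phosphate", phosphate), ("sebacic acid", sebacic)]:
    print(name, "AUC =", round(roc_auc_score(y, marker), 2))
```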
Fong, Allan; Clark, Lindsey; Cheng, Tianyi; Franklin, Ella; Fernandez, Nicole; Ratwani, Raj; Parker, Sarah Henrickson
2017-07-01
The objective of this paper is to identify attribute patterns of influential individuals in intensive care units using unsupervised cluster analysis. Despite the acknowledgement that culture of an organisation is critical to improving patient safety, specific methods to shift culture have not been explicitly identified. A social network analysis survey was conducted and an unsupervised cluster analysis was used. A total of 100 surveys were gathered. Unsupervised cluster analysis was used to group individuals with similar dimensions highlighting three general genres of influencers: well-rounded, knowledge and relational. Culture is created locally by individual influencers. Cluster analysis is an effective way to identify common characteristics among members of an intensive care unit team that are noted as highly influential by their peers. To change culture, identifying and then integrating the influencers in intervention development and dissemination may create more sustainable and effective culture change. Additional studies are ongoing to test the effectiveness of utilising these influencers to disseminate patient safety interventions. This study offers an approach that can be helpful in both identifying and understanding influential team members and may be an important aspect of developing methods to change organisational culture. © 2017 John Wiley & Sons Ltd.
Ohshima, Chihiro; Takahashi, Hajime; Phraephaisarn, Chirapiphat; Vesaratchavest, Mongkol; Keeratipibul, Suwimon; Kuda, Takashi; Kimura, Bon
2014-01-01
Listeria monocytogenes is the causative bacteria of listeriosis, which has a higher mortality rate than that of other causes of food poisoning. Listeria spp., of which L. monocytogenes is a member, have been isolated from food and manufacturing environments. Several methods have been published for identifying Listeria spp.; however, many of the methods cannot identify newly categorized Listeria spp. Additionally, they are often not suitable for the food industry, owing to their complexity, cost, or time consumption. Recently, high-resolution melting analysis (HRMA), which exploits DNA-sequence differences, has received attention as a simple and quick genomic typing method. In the present study, a new method for the simple, rapid, and low-cost identification of Listeria spp. has been presented using the genes rarA and ldh as targets for HRMA. DNA sequences of 9 Listeria species were first compared, and polymorphisms were identified for each species for primer design. Species specificity of each HRM curve pattern was estimated using type strains of all the species. Among the 9 species, 7 were identified by HRMA using rarA gene, including 3 new species. The remaining 2 species were identified by HRMA of ldh gene. The newly developed HRMA method was then used to assess Listeria isolates from the food industry, and the method efficiency was compared to that of identification by 16S rDNA sequence analysis. The 2 methods were in coherence for 92.6% of the samples, demonstrating the high accuracy of HRMA. The time required for identifying Listeria spp. was substantially low, and the process was considerably simplified, providing a useful and precise method for processing multiple samples per day. Our newly developed method for identifying Listeria spp. is highly valuable; its use is not limited to the food industry, and it can be used for the isolates from the natural environment. PMID:24918440
Ruan, Xiyun; Li, Hongyun; Liu, Bo; Chen, Jie; Zhang, Shibao; Sun, Zeqiang; Liu, Shuangqing; Sun, Fahai; Liu, Qingyong
2015-01-01
The aim of the present study was to develop a novel method for identifying pathways associated with renal cell carcinoma (RCC) based on a gene co-expression network. A framework was established where a co-expression network was derived from the database as well as various co-expression approaches. First, the backbone of the network based on differentially expressed (DE) genes between RCC patients and normal controls was constructed by the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. The differentially co-expressed links were detected by Pearson’s correlation, the empirical Bayesian (EB) approach and Weighted Gene Co-expression Network Analysis (WGCNA). The co-expressed gene pairs were merged by a rank-based algorithm. We obtained 842, 371, 2,883 and 1,595 co-expressed gene pairs from the co-expression networks of the STRING database, Pearson’s correlation, the EB method and WGCNA, respectively. Two hundred and eighty-one differentially co-expressed (DC) gene pairs were obtained from the merged network using this novel method. Pathway enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and the network enrichment analysis (NEA) method were performed to verify the feasibility of the merged method. Results of the KEGG and NEA pathway analyses showed that the network was associated with RCC. The suggested method was computationally efficient for identifying pathways associated with RCC and is a useful complement to traditional co-expression analysis. PMID:26058425
The efficacy of wire and glue hair snares in identifying mesocarnivores
William J. Zielinski; Fredrick V. Schlexer; Kristine L. Pilgrim; Michael K. Schwartz
2006-01-01
Track plates and cameras are proven methods for detecting and identifying fishers (Martes pennanti) and other mesocarnivores. But these methods are inadequate to achieve demographic and population-monitoring objectives that require identifying sex and individuals. Although noninvasive collection of biological material for genetic analysis (i.e.,...
A Practical Method of Policy Analysis by Simulating Policy Options
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
This article focuses on a method of policy analysis that has evolved from the previous articles in this issue. The first section, "Toward a Theory of Educational Production," identifies concepts from science and achievement production to be incorporated into this policy analysis method. Building on Kuhn's (1970) discussion regarding paradigms, the…
Wang, Qi; Zhao, Xiao-Juan; Wang, Zi-Wei; Liu, Li; Wei, Yong-Xin; Han, Xiao; Zeng, Jing; Liao, Wan-Jin
2017-08-01
Rapid and precise identification of Cronobacter species is important for foodborne pathogen detection, however, commercial biochemical methods can only identify Cronobacter strains to genus level in most cases. To evaluate the power of mass spectrometry based on matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF MS) for Cronobacter species identification, 51 Cronobacter strains (eight reference and 43 wild strains) were identified by both MALDI-TOF MS and 16S rRNA gene sequencing. Biotyper RTC provided by Bruker identified all eight reference and 43 wild strains as Cronobacter species, which demonstrated the power of MALDI-TOF MS to identify Cronobacter strains to genus level. However, using the Bruker's database (6903 main spectra products) and Biotyper software, the MALDI-TOF MS analysis could not identify the investigated strains to species level. When MALDI-TOF MS analysis was performed using the combined in-house Cronobacter database and Bruker's database, bin setting, and unweighted pair group method with arithmetic mean (UPGMA) clustering, all the 51 strains were clearly identified into six Cronobacter species and the identification accuracy increased from 60% to 100%. We demonstrated that MALDI-TOF MS was reliable and easy-to-use for Cronobacter species identification and highlighted the importance of establishing a reliable database and improving the current data analysis methods by integrating the bin setting and UPGMA clustering. Copyright © 2017. Published by Elsevier B.V.
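UPGMA, named in the abstract, is average-linkage hierarchical clustering, which SciPy exposes as method="average"; a sketch on stand-in vectors for binned MALDI-TOF spectra:

```python
# UPGMA (average-linkage) clustering of stand-in spectral vectors.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
spectra = np.vstack([
    rng.normal(0.0, 0.1, (3, 50)),  # three profiles of one species
    rng.normal(1.0, 0.1, (3, 50)),  # three profiles of a second species
])

Z = linkage(spectra, method="average", metric="euclidean")
print(fcluster(Z, t=2, criterion="maxclust"))  # two clusters recovered
```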
ERIC Educational Resources Information Center
Blanchette, Judith
2012-01-01
The purpose of this empirical study was to determine the extent to which three different objective analytical methods--sequence analysis, surface cohesion analysis, and lexical cohesion analysis--can most accurately identify specific characteristics of online interaction. Statistically significant differences were found in all points of…
Horne, Benjamin D; Malhotra, Alka; Camp, Nicola J
2003-01-01
Background High triglycerides (TG) and low high-density lipoprotein cholesterol (HDL-C) jointly increase coronary disease risk. We performed linkage analysis for TG/HDL-C ratio in the Framingham Heart Study data as a quantitative trait, using methods implemented in LINKAGE, GENEHUNTER (GH), MCLINK, and SOLAR. Results were compared to each other and to those from a previous evaluation using SOLAR for TG/HDL-C ratio on this sample. We also investigated linked pedigrees in each region using by-pedigree analysis. Results Fourteen regions with at least suggestive linkage evidence were identified, including some that may increase and some that may decrease coronary risk. Ten of the 14 regions were identified by more than one analysis, and several of these regions were not previously detected. The best regions identified for each method were on chromosomes 2 (LOD = 2.29, MCLINK), 5 (LOD = 2.65, GH), 7 (LOD = 2.67, SOLAR), and 22 (LOD = 3.37, LINKAGE). By-pedigree multi-point LOD values in MCLINK showed linked pedigrees for all five regions, ranging from 3 linked pedigrees (chromosome 5) to 14 linked pedigrees (chromosome 7), and suggested localizations of between 9 cM and 27 cM in size. Conclusion Reasonable concordance was found across analysis methods. No single method identified all regions, either by full sample LOD or with by-pedigree analysis. Concordance across methods appeared better at the pedigree level, with many regions showing by-pedigree support in MCLINK when no evidence was observed in the full sample. Thus, investigating by-pedigree linkage evidence may provide a useful tool for evaluating linkage regions. PMID:14975161
Text analysis methods, text analysis apparatuses, and articles of manufacture
Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M
2014-10-28
Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.
Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred
2017-01-09
A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase-regions and proper assignment of corresponding crystal structures is necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase-regions may then be correlated to functional properties in an expedited manner. To prove the functionality of htAx, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed by htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.
Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests
Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender
2015-01-01
Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
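A stripped-down version of a knock-out strategy with random forests: train, record the most important feature, remove ("knock out") it, and retrain, so that relevant-but-redundant features surface in later rounds. The published method adds statistical safeguards this sketch omits, and the data are synthetic.

```python
# Iterative knock-out with random forest importances on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 300
signal = rng.normal(size=n)
X = np.column_stack([
    signal + rng.normal(0, 0.5, n),  # relevant feature 0
    signal + rng.normal(0, 0.5, n),  # correlated, also relevant feature 1
    rng.normal(size=(n,)),           # noise feature 2
])
y = (signal > 0).astype(int)

remaining = list(range(X.shape[1]))
selected = []
for _ in range(2):  # two knock-out rounds
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[:, remaining], y)
    best = remaining[int(np.argmax(rf.feature_importances_))]
    selected.append(best)
    remaining.remove(best)
print("features identified:", selected)  # expect [0, 1] in some order
```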
Transient analysis mode participation for modal survey target mode selection using MSC/NASTRAN DMAP
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Ibrahim, Omar M.; Sullivan, Timothy L.; Goodnight, Thomas W.
1994-01-01
Many methods have been developed to aid analysts in identifying component modes which contribute significantly to component responses. These modes, typically targeted for dynamic model correlation via a modal survey, are known as target modes. Most methods used to identify target modes are based on component global dynamic behavior. It is sometimes unclear if these methods identify all modes contributing to responses important to the analyst. These responses are usually those in areas of hardware design concerns. One method used to check the completeness of target mode sets and identify modes contributing significantly to important component responses is mode participation. With this method, the participation of component modes in dynamic responses is quantified. Those modes which have high participation are likely modal survey target modes. Mode participation is most beneficial when it is used with responses from analyses simulating actual flight events. For spacecraft, these responses are generated via a structural dynamic coupled loads analysis. Using MSC/NASTRAN DMAP, a method has been developed for calculating mode participation based on transient coupled loads analysis results. The algorithm has been implemented to be compatible with an existing coupled loads methodology and has been used successfully to develop a set of modal survey target modes.
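A hedged numeric sketch of the participation computation itself (the article's implementation is in MSC/NASTRAN DMAP): expand a response degree of freedom as x(t) = sum over modes of phi_i * q_i(t) and rank modes by the RMS of their individual contributions. The modal histories and mode-shape values below are invented.

```python
# Rank modes by the RMS of their contribution to one response DOF.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
# Modal responses q_i(t) for three modes (stand-ins for coupled loads output).
q = np.vstack([
    np.sin(2 * np.pi * 5 * t) * np.exp(-1.0 * t),
    0.3 * np.sin(2 * np.pi * 12 * t) * np.exp(-2.0 * t),
    0.05 * np.sin(2 * np.pi * 30 * t),
])
phi = np.array([0.8, 1.5, 0.2])  # mode shape values at the response DOF

contrib = phi[:, None] * q                 # per-mode time histories
rms = np.sqrt((contrib ** 2).mean(axis=1))
participation = rms / rms.sum()
for i, p in enumerate(participation, start=1):
    print(f"mode {i}: {100 * p:.1f}% of summed RMS response")
```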
Atomistic cluster alignment method for local order mining in liquids and glasses
NASA Astrophysics Data System (ADS)
Fang, X. W.; Wang, C. Z.; Yao, Y. X.; Ding, Z. J.; Ho, K. M.
2010-11-01
An atomistic cluster alignment method is developed to identify and characterize the local atomic structural order in liquids and glasses. With the “order mining” idea for structurally disordered systems, the method can detect the presence of any type of local order in the system and can quantify the structural similarity between a given set of templates and the aligned clusters in a systematic and unbiased manner. Moreover, population analysis can also be carried out for various types of clusters in the system. The advantages of the method in comparison with other previously developed analysis methods are illustrated by performing the structural analysis for four prototype systems (i.e., pure Al, pure Zr, Zr35Cu65, and Zr36Ni64). The results show that the cluster alignment method can identify various types of short-range orders (SROs) in these systems correctly while some of these SROs are difficult to capture by most of the currently available analysis methods (e.g., Voronoi tessellation method). Such a full three-dimensional atomistic analysis method is generic and can be applied to describe the magnitude and nature of noncrystalline ordering in many disordered systems.
Yi, Zhou; Manil-Ségalen, Marion; Sago, Laila; Glatigny, Annie; Redeker, Virginie; Legouis, Renaud; Mucchielli-Giorgi, Marie-Hélène
2016-05-06
Affinity purifications followed by mass spectrometric analysis are used to identify protein-protein interactions. Because quantitative proteomic data are noisy, it is necessary to develop statistical methods to eliminate false-positives and identify true partners. We present here a novel approach for filtering false interactors, named "SAFER" for mass Spectrometry data Analysis by Filtering of Experimental Replicates, which is based on the reproducibility of the replicates and the fold-change of the protein intensities between bait and control. To identify regulators or targets of autophagy, we characterized the interactors of LGG-1, a ubiquitin-like protein involved in autophagosome formation in C. elegans. LGG-1 partners were purified by affinity, analyzed by nanoLC-MS/MS, and quantified by a label-free proteomic approach based on the mass spectrometric signal intensity of peptide precursor ions. Because the selection of confident interactions depends on the method used for statistical analysis, we compared SAFER with several statistical tests and different scoring algorithms on this set of data. We show that SAFER recovers high-confidence interactors that have been ignored by the other methods and identified new candidates involved in the autophagy process. We further validated our method on a public data set and conclude that SAFER notably improves the identification of protein interactors.
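A sketch of a replicate-reproducibility filter in the spirit of SAFER: keep a protein only if it is quantified in every bait replicate and its median bait/control fold-change clears a cut-off. The published scoring is more elaborate; the threshold and intensities are invented.

```python
# SAFER-like filter: require reproducibility plus a fold-change cut-off.
import numpy as np

def safer_like_filter(bait: np.ndarray, control: np.ndarray,
                      min_fold: float = 3.0) -> bool:
    """bait, control: intensities across replicates for one protein."""
    if np.any(bait <= 0):  # must be quantified in every bait replicate
        return False
    fold = np.median(bait) / max(np.median(control), 1e-9)
    return fold >= min_fold

print(safer_like_filter(np.array([9e5, 7e5, 8e5]),
                        np.array([1e5, 2e5, 1e5])))  # True
print(safer_like_filter(np.array([9e5, 0.0, 8e5]),
                        np.array([1e5, 2e5, 1e5])))  # False
```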
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e., initial conditions and constant control signals) necessary for remedying the non-identifiability and achieving unique parameter estimation can be provided. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common in linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a software package.
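A minimal numerical sketch of the core step, analyzing linear dependencies among the columns of an output sensitivity matrix, can be done with an SVD; the matrix below is a hypothetical stand-in, not one of the paper's models:

```python
import numpy as np

# Hypothetical output sensitivity matrix S: each column is the sensitivity
# of the measured outputs to one parameter, stacked over time points.
S = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 1.5],
              [2.0, 4.0, 6.0]])   # columns 2 and 3 are multiples of column 1

U, s, Vt = np.linalg.svd(S)
tol = max(S.shape) * np.finfo(float).eps * s[0]
rank = int((s > tol).sum())
print("numerical rank:", rank, "of", S.shape[1], "parameters")

# Null-space vectors give linear parameter combinations that leave the
# outputs unchanged, i.e., the non-identifiable directions.
null_space = Vt[rank:].T
print("non-identifiable combinations:\n", null_space)
```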
Evaluation of redundancy analysis to identify signatures of local adaptation.
Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric
2018-05-26
Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful to detect adaptive variation. This paper aims at proposing a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a nonconstrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have a greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that if RDA and LFMM have a similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage of identifying the main selective gradients as a combination of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients on an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations to northern and continental populations in the northwestern American coast.
Faucher, Mary Ann; Garner, Shelby L
2015-11-01
The purpose of this manuscript is to compare methods and thematic representations of the challenges and supports of family caregivers identified with photovoice methodology contrasted with content analysis, a more traditional qualitative approach. Results from a photovoice study utilizing a participatory action research framework were compared to an analysis of the audio-transcripts from that study utilizing content analysis methodology. Major similarities between the results are identified, with some notable differences. Content analysis provides a more in-depth and abstract elucidation of the nature of the challenges and supports of the family caregiver. The comparison provides evidence to support the trustworthiness of photovoice methodology, with limitations identified. The enhanced elaboration of themes and categories with content analysis may have some advantages relevant to the utilization of this knowledge by health care professionals. Copyright © 2015 Elsevier Inc. All rights reserved.
Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.
Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang
2018-05-15
In this study, we present an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of samples, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low-confidence phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis to allow the identification of additional phosphopeptides with high confidence. The development of this targeted approach is straightforward because the same sample and the same LC system are used for the discovery and the targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase, which allows this method to analyze minute amounts of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity-purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from these protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low-abundance phosphopeptides and could be a powerful tool to study the phosphorylation-regulated assembly of protein complexes.
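The pseudotargeted-list construction step can be pictured as a simple filter over DDA search results; the table, column names, and score cutoff below are hypothetical, and the real workflow would of course use the search engine's own confidence metric:

```python
import pandas as pd

# Toy DDA search results: peptide, charge, precursor m/z, and ID score.
dda = pd.DataFrame({
    "peptide": ["AS(ph)PEK", "T(ph)LIDR", "S(ph)QEVK"],
    "charge": [2, 2, 3],
    "mz": [345.2, 412.7, 298.4],
    "score": [95.0, 42.0, 38.0],   # search-engine confidence score
})

# High-scoring IDs are accepted directly; low-confidence ones are re-targeted
# by PRM on the same sample and LC gradient for confident verification.
cutoff = 60.0
prm_targets = dda[dda["score"] < cutoff][["peptide", "charge", "mz"]]
prm_targets.to_csv("prm_inclusion_list.csv", index=False)
print(prm_targets)
```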
How Is Wilson Disease Inherited?
... ATP7B gene have been identified thus far. Testing Methods Available Linkage analysis (Haplotype analysis) Molecular genetic testing ... genetic counselor who can carefully discuss the best method of testing to perform and the benefits, limitations, ...
Data from quantitative label free proteomics analysis of rat spleen.
Dudekula, Khadar; Le Bihan, Thierry
2016-09-01
The dataset presented in this work has been obtained using a label-free quantitative proteomic analysis of rat spleen. A robust method for extraction of proteins from rat spleen tissue and LC-MS-MS analysis was developed using a urea- and SDS-based buffer. Different fractionation methods were compared. A total of 3484 different proteins were identified from the pool of all experiments run in this study (a total of 2460 proteins with at least two peptides). A total of 1822 proteins were identified from nine non-fractionated pulse gels; 2288 proteins and 2864 proteins were identified by SDS-PAGE fractionation into three and five fractions, respectively. The proteomics data are deposited in the ProteomeXchange Consortium via PRIDE (PXD003520); Progenesis and Maxquant outputs are presented in the supporting information. The lists of proteins generated under the different fractionation regimes allow the nature of the identified proteins to be assessed, the variability in quantitative analysis associated with each sampling strategy to be evaluated, and a proper number of replicates for future quantitative analyses to be defined.
Co-authorship network analysis in health research: method and potential use.
Fonseca, Bruna de Paula Fonseca E; Sampaio, Ricardo Barros; Fonseca, Marcus Vinicius de Araújo; Zicker, Fabio
2016-04-30
Scientific collaboration networks are a hallmark of contemporary academic research. Researchers are no longer independent players, but members of teams that bring together complementary skills and multidisciplinary approaches around common goals. Social network analysis and co-authorship networks are increasingly used as powerful tools to assess collaboration trends and to identify leading scientists and organizations. The analysis reveals the social structure of the networks by identifying actors and their connections. This article reviews the method and potential applications of co-authorship network analysis in health. The basic steps for conducting co-authorship studies in health research are described and common network metrics are presented. The application of the method is exemplified by an overview of the global research network for Chikungunya virus vaccines.
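As a concrete illustration of the basic steps described above (build the co-authorship graph from paper author lists, then compute centrality metrics), here is a minimal sketch using networkx; the author names are invented:

```python
import itertools
import networkx as nx

papers = [
    ["Silva A", "Costa B", "Rocha C"],
    ["Silva A", "Rocha C"],
    ["Costa B", "Dias D"],
]

G = nx.Graph()
for authors in papers:
    # Every pair of co-authors on a paper gets an (accumulating) edge weight.
    for a, b in itertools.combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Common metrics: degree and betweenness highlight well-connected researchers
# and brokers between otherwise separate groups.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```

For weighted betweenness, edge weights would first be inverted so that stronger collaborations count as shorter distances.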
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cobb, G.P.; Braman, R.S.; Gilbert, R.A.
Atmospheric organics were sampled and analyzed by using the carbon hollow tube-gas chromatography method. Chromatograms from spice mixtures, cigarettes, and ambient air were analyzed. Principal factor analysis of row order chromatographic data produces factors which are eigenchromatograms of the components in the samples. Component sources are identified from the eigenchromatograms in all experiments and the individual eigenchromatogram corresponding to a particular source is determined in most cases. Organic sources in ambient air and in cigaretts are identified with 87% certainty. Analysis of clove cigarettes allows the determination of the relative amount of clove in different cigarettes. A new nondestructive qualitymore » control method using the hollow tube-gas chromatography analysis is discussed.« less
Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher
2012-01-01
Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
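Of the four families listed, a cumulative-sum (CUSUM) statistic is among the simplest to sketch; the session scores below are synthetic and the function is a generic illustration, not one of the cited program's analyses:

```python
import numpy as np

def cusum_change_point(x):
    """Return the most likely change point via a cumulative-sum statistic."""
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    k = int(np.argmax(np.abs(s)))
    return k, s[k]

# Toy session-by-session alliance ratings with a shift after session 10.
rng = np.random.default_rng(1)
scores = np.r_[np.full(10, 4.0), np.full(10, 5.5)] + rng.normal(0, 0.2, 20)
k, stat = cusum_change_point(scores)
print(f"change point after session {k + 1} (statistic {stat:.2f})")
```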
Pathway Analysis in Attention Deficit Hyperactivity Disorder: An Ensemble Approach
Mooney, Michael A.; McWeeney, Shannon K.; Faraone, Stephen V.; Hinney, Anke; Hebebrand, Johannes; Nigg, Joel T.; Wilmot, Beth
2016-01-01
Despite a wealth of evidence for the role of genetics in attention deficit hyperactivity disorder (ADHD), specific and definitive genetic mechanisms have not been identified. Pathway analyses, a subset of gene-set analyses, extend the knowledge gained from genome-wide association studies (GWAS) by providing functional context for genetic associations. However, there are numerous methods for association testing of gene sets and no real consensus regarding the best approach. The present study applied six pathway analysis methods to identify pathways associated with ADHD in two GWAS datasets from the Psychiatric Genomics Consortium. Methods that utilize genotypes to model pathway-level effects identified more replicable pathway associations than methods using summary statistics. In addition, pathways implicated by more than one method were significantly more likely to replicate. A number of brain-relevant pathways, such as RhoA signaling, glycosaminoglycan biosynthesis, fibroblast growth factor receptor activity, and pathways containing potassium channel genes, were nominally significant by multiple methods in both datasets. These results support previous hypotheses about the role of regulation of neurotransmitter release, neurite outgrowth and axon guidance in contributing to the ADHD phenotype and suggest the value of cross-method convergence in evaluating pathway analysis results. PMID:27004716
Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G
2017-05-01
Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.
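The CONFETI implementation is not reproduced here; the following is a loose sketch of the core idea under stated assumptions (ICA decomposition of expression, a screen for genetic-like components left as a stub, and a sample covariance built from the remaining components for the mixed-model random effect):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy expression matrix: 100 samples x 500 genes (sizes hypothetical).
rng = np.random.default_rng(2)
Y = rng.normal(size=(100, 500))

# Decompose expression into statistically independent components.
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
S = ica.fit_transform(Y)          # per-sample component activations

# Stub for the genetic screen: components whose activations associate with
# genotype would be flagged as candidate broad-impact eQTL signal and kept
# out of the confounder set. Here no component is flagged.
genetic_like = np.zeros(S.shape[1], dtype=bool)
confounders = S[:, ~genetic_like]

# Sample-by-sample covariance used as the random-effect kernel.
K = np.cov(confounders)           # 100 x 100
print(K.shape)
```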
Model-free fMRI group analysis using FENICA.
Schöpf, V; Windischberger, C; Robinson, S; Kasess, C H; Fischmeister, F PhS; Lanzenberger, R; Albrecht, J; Kleemann, A M; Kopietz, R; Wiesmann, M; Moser, E
2011-03-01
Exploratory analysis of functional MRI data allows activation to be detected even if the time course differs from that which is expected. Independent Component Analysis (ICA) has emerged as a powerful approach, but current extensions to the analysis of group studies suffer from a number of drawbacks: they can be computationally demanding, results are dominated by technical and motion artefacts, and some methods require that time courses be the same for all subjects or that templates be defined to identify common components. We have developed a group ICA (gICA) method which is based on single-subject ICA decompositions and the assumption that the spatial distribution of signal changes in components which reflect activation is similar between subjects. This approach, which we have called Fully Exploratory Network Independent Component Analysis (FENICA), identifies group activation in two stages. ICA is performed on the single-subject level, then consistent components are identified via spatial correlation. Group activation maps are generated in a second-level GLM analysis. FENICA is applied to data from three studies employing a wide range of stimulus and presentation designs. These are an event-related motor task, a block-design cognition task and an event-related chemosensory experiment. In all cases, the group maps identified by FENICA as being the most consistent over subjects correspond to task activation. There is good agreement between FENICA results and regions identified in prior GLM-based studies. In the chemosensory task, additional regions are identified by FENICA and temporal concatenation ICA that we show are related to the stimulus but exhibit a delayed response. FENICA is a fully exploratory method that allows activation to be identified without assumptions about temporal evolution, and isolates activation from other sources of signal fluctuation in fMRI. It has the advantage over other gICA methods that it is computationally undemanding, spotlights components relating to activation rather than artefacts, allows the use of familiar statistical thresholding through deployment of a higher level GLM analysis, and can be applied to studies where the paradigm is different for all subjects. Copyright © 2010 Elsevier Inc. All rights reserved.
Liu, Rui-Sang; Jin, Guang-Huai; Xiao, Deng-Rong; Li, Hong-Mei; Bai, Feng-Wu; Tang, Ya-Jie
2015-01-01
Aroma results from the interplay of volatile organic compounds (VOCs), and the attributes of microbially produced aromas are significantly affected by fermentation conditions. Among the VOCs, only a few contribute to aroma. Thus, screening and identification of the key VOCs is critical for microbial aroma production. The traditional method is based on gas chromatography-olfactometry (GC-O), which is time-consuming and laborious. Considering the Tuber melanosporum fermentation system as an example, a new method to screen and identify the key VOCs by combining an aroma evaluation method with principal component analysis (PCA) was developed in this work. First, an aroma sensory evaluation method was developed to screen 34 potential favorite aroma samples from 504 fermentation samples. Second, PCA was employed to screen nine common key VOCs from these 34 samples. Third, seven key VOCs were identified by the traditional method. Finally, all seven key VOCs identified by the traditional method were also identified, along with four others, by the new strategy. These results indicate the reliability of the new method and demonstrate it to be a viable alternative to the traditional method. PMID:26655663
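A loadings-based PCA screen of this kind can be sketched in a few lines; the data matrix and VOC names below are synthetic placeholders, not the paper's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy matrix: rows are fermentation samples, columns are VOC peak areas.
rng = np.random.default_rng(3)
X = rng.lognormal(size=(34, 20))
voc_names = [f"VOC_{i}" for i in range(20)]

pca = PCA(n_components=2)
pca.fit(X)

# VOCs with the largest absolute loadings on the leading components are
# candidate key aroma contributors shared across the favourite samples.
loadings = np.abs(pca.components_).max(axis=0)
top = np.argsort(loadings)[::-1][:9]
print([voc_names[i] for i in top])
```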
Pathways to Lean Software Development: An Analysis of Effective Methods of Change
ERIC Educational Resources Information Center
Hanson, Richard D.
2014-01-01
This qualitative Delphi study explored the challenges that exist in delivering software on time, within budget, and with the original scope identified. The literature review identified many attempts over the past several decades to reform the methods used to develop software. These attempts found that the classical waterfall method, which is…
Methods for land use impact assessment: A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perminova, Tatiana, E-mail: tatiana.perminova@utt.fr; Department of Geoecology and Geochemistry, Institute of Natural Resources, National Research Tomsk Polytechnic University, 30 Lenin Avenue, 634050 Tomsk; Sirina, Natalia, E-mail: natalia.sirina@utt.fr
Many types of methods to assess land use impact have been developed. Nevertheless, a systematic synthesis of all these approaches is necessary to highlight the most commonly used and most effective methods. Given the growing interest in this area of research, a review of the different methods of assessing land use impact (LUI) was performed using bibliometric analysis. One hundred eighty-seven articles in the agricultural, biological and environmental sciences were examined. According to our results, the most frequently used land use assessment methods are Life-Cycle Assessment, Material Flow Analysis/Input–Output Analysis, Environmental Impact Assessment and Ecological Footprint. Comparison of the methods allowed their specific features to be identified and led to the conclusion that a combination of several methods is the best basis for a comprehensive analysis of land use impact assessment. - Highlights: • We identified the most frequently used methods in land use impact assessment. • A comparison of the methods based on several criteria was carried out. • Agricultural land use is by far the most common area of study within the methods. • Incentive driven methods, like LCA, arouse the most interest in this field.
Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria
Magesh, R.; George Priya Doss, C.
2012-01-01
A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to effectively and efficiently identify functionally important nsSNPs which may be deleterious or disease causing and to identify their molecular effects. The prediction of the phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and their relationship with susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. We also applied these methods as a first-pass filter to identify the deleterious substitutions worth pursuing for further experimental research. In this analysis, we used the existing computational methods to explore the mutation-structure-function relationship in the HGD gene causing alkaptonuria. PMID:22606059
BDA: A novel method for identifying defects in body-centered cubic crystals.
Möller, Johannes J; Bitzek, Erik
2016-01-01
The accurate and fast identification of crystallographic defects plays a key role for the analysis of atomistic simulation output data. For face-centered cubic (fcc) metals, most existing structure analysis tools allow for the direct distinction of common defects, such as stacking faults or certain low-index surfaces. For body-centered cubic (bcc) metals, on the other hand, a robust way to identify such defects is currently not easily available. We therefore introduce a new method for analyzing atomistic configurations of bcc metals, the BCC Defect Analysis (BDA). It uses existing structure analysis algorithms and combines their results to uniquely distinguish between typical defects in bcc metals. In essence, the BDA method offers the following features: • Identification of typical defect structures in bcc metals. • Reduction of erroneously identified defects by iterative comparison to the defects in the atom's neighborhood. • Availability as a ready-to-use Python script for the widespread visualization tool OVITO [http://ovito.org].
An Automated Method for Identifying Artifact in Independent Component Analysis of Resting-State fMRI
Bhaganagarapu, Kaushik; Jackson, Graeme D.; Abbott, David F.
2013-01-01
An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available. PMID:23847511
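The published implementation is available as source code per the abstract; it is not reproduced here. The sketch below only illustrates, under assumptions, two heuristic features of the kind such automated classifiers can combine (a high-frequency spectral fraction and an edge-voxel fraction); the function name, thresholds, and arrays are hypothetical:

```python
import numpy as np

def artifact_features(timecourse, spatial_map, brain_edge_mask, tr=3.0):
    """Two simple heuristics of the kind an IC artifact classifier combines."""
    # 1) Fraction of spectral power above 0.1 Hz (BOLD signal is slow).
    f = np.fft.rfftfreq(len(timecourse), d=tr)
    p = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
    high_freq_frac = p[f > 0.1].sum() / p.sum()
    # 2) Fraction of supra-threshold voxels on the brain edge (motion-like).
    on = np.abs(spatial_map) > 2.0
    edge_frac = (on & brain_edge_mask).sum() / max(on.sum(), 1)
    return high_freq_frac, edge_frac

# Toy usage: an IC would be flagged as artifact when both fractions are high.
rng = np.random.default_rng(6)
tc = rng.normal(size=200)
smap = rng.normal(size=(40, 40))
edge = np.zeros((40, 40), dtype=bool); edge[0, :] = True
print(artifact_features(tc, smap, edge))
```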
Systematic text condensation: a strategy for qualitative analysis.
Malterud, Kirsti
2012-12-01
To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies. Giorgi's psychological phenomenological analysis is the point of departure and inspiration for systematic text condensation. The basic elements of Giorgi's method and the elaboration of these in systematic text condensation are presented, followed by a detailed description of procedures for analysis according to systematic text condensation. Finally, similarities and differences compared with other frequently applied methods for qualitative analysis are identified, as the foundation of a discussion of strengths and limitations of systematic text condensation. Systematic text condensation is a descriptive and explorative method for thematic cross-case analysis of different types of qualitative data, such as interview studies, observational studies, and analysis of written texts. The method represents a pragmatic approach, although inspired by phenomenological ideas, and various theoretical frameworks can be applied. The procedure consists of the following steps: 1) total impression - from chaos to themes; 2) identifying and sorting meaning units - from themes to codes; 3) condensation - from code to meaning; 4) synthesizing - from condensation to descriptions and concepts. Similarities and differences comparing systematic text condensation with other frequently applied qualitative methods regarding thematic analysis, theoretical methodological framework, analysis procedures, and taxonomy are discussed. Systematic text condensation is a strategy for analysis developed from traditions shared by most of the methods for analysis of qualitative data. The method offers the novice researcher a process of intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour.
Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S
2017-02-01
Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally relevant clusters. Superfamily members need not be identified initially; MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than using a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.
Torrens, George Edward
2018-01-01
Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as being applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used (under 10 within the 20 case studies) and the hundreds available. Implications for Rehabilitation: The communication highlights a number of issues that have implications for those involved in assistive technology new product development: • The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, which provide a comprehensive reference list for practitioners in the field; • The review within the study suggests only a limited number of research and design methods are regularly used by industrial design focused assistive technology new product developers; and • Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.
Comparison of normalization methods for the analysis of metagenomic gene abundance data.
Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik
2018-04-20
In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, cumulative sum scaling (CSS) also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
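For reference, the RLE (median-of-ratios) scheme recommended above is compact enough to sketch directly; the count matrix here is a toy example, not one of the evaluated datasets:

```python
import numpy as np

def rle_size_factors(counts):
    """Median-of-ratios (RLE) size factors; counts is genes x samples."""
    with np.errstate(divide="ignore"):
        log_counts = np.log(counts)
    # Reference: per-gene geometric mean across samples; genes containing
    # zeros get a -inf reference and drop out of the median.
    ref = log_counts.mean(axis=1)
    finite = np.isfinite(ref)
    ratios = log_counts[finite] - ref[finite, None]
    return np.exp(np.median(ratios, axis=0))

counts = np.array([[10, 20], [100, 210], [5, 9], [50, 0]], dtype=float)
sf = rle_size_factors(counts)
normalized = counts / sf
print("size factors:", sf)
```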
Vulnerabilities, Influences and Interaction Paths: Failure Data for Integrated System Risk Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land
2006-01-01
We describe graph-based analysis methods for identifying and analyzing cross-subsystem interaction risks from subsystem connectivity information. By discovering external and remote influences that would be otherwise unexpected, these methods can support better communication among subsystem designers at points of potential conflict and the design of more dependable and diagnosable systems. These methods identify hazard causes that can impact vulnerable functions or entities if propagated across interaction paths from the hazard source to the vulnerable target. The analysis can also assess combined impacts of And-Or trees of disabling influences. The analysis can use ratings of hazards and vulnerabilities to calculate cumulative measures of severity and importance. Identification of cross-subsystem hazard-vulnerability pairs and propagation paths across subsystems will increase coverage of hazard and risk analysis and can indicate risk control and protection strategies.
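The core propagation-path search can be illustrated with a standard graph library; the subsystem names and connectivity below are invented, and this is a generic sketch rather than the authors' tooling:

```python
import networkx as nx

# Directed graph of subsystem connectivity: edges are interaction paths.
G = nx.DiGraph()
G.add_edges_from([
    ("power_bus", "avionics"), ("avionics", "thruster_ctrl"),
    ("coolant_loop", "avionics"), ("power_bus", "coolant_loop"),
])

hazard_sources = {"power_bus"}
vulnerable_targets = {"thruster_ctrl"}

# Every (source, target) pair reachable via interaction paths is a
# cross-subsystem risk; the concrete paths support design review.
for src in hazard_sources:
    reachable = nx.descendants(G, src)
    for tgt in vulnerable_targets & reachable:
        for path in nx.all_simple_paths(G, src, tgt):
            print(" -> ".join(path))
```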
Liang, Sai; Qu, Shen; Xu, Ming
2016-02-02
To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (a.k.a. the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors that are important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths, extracted from structural path analysis, that pass through a particular sector. We take China as an example and find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, which cannot be obtained with existing methods, on the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies.
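The betweenness idea, scoring sectors by the pressure carried on paths that pass through them, can be sketched as follows; the paths and pressure values are hypothetical stand-ins for structural path analysis output:

```python
from collections import Counter

# Toy supply-chain paths from structural path analysis, each carrying an
# environmental pressure along the path (hypothetical units).
paths = [
    (["coal", "power", "steel", "construction"], 12.0),
    (["coal", "power", "chemicals"], 7.0),
    (["ore", "steel", "construction"], 5.0),
]

# Betweenness-style score: pressure flowing *through* a sector, i.e. paths
# on which it is an intermediate node (neither source nor final consumer).
score = Counter()
for nodes, pressure in paths:
    for sector in nodes[1:-1]:
        score[sector] += pressure

print(score.most_common())  # key transmission sectors first
```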
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
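A minimal Sobol analysis of the kind described can be run with the SALib package (an assumption: SALib's classic saltelli/sobol interface is used here, not the study's own code); the toy model deliberately includes an input interaction so that total-order indices exceed first-order ones:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy exposure model with an interaction between inputs x1 and x2.
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0, 1], [0, 1], [0, 1]],
}
X = saltelli.sample(problem, 1024)
Y = X[:, 0] * X[:, 1] + 0.1 * X[:, 2]

Si = sobol.analyze(problem, Y)
print("first-order:", Si["S1"])   # main contributions
print("total-order:", Si["ST"])   # includes interaction effects
```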
ENVIRONMENTAL METHODS TESTING SITE PROJECT: DATA MANAGEMENT PROCEDURES PLAN
The Environmental Methods Testing Site (EMTS) Data Management Procedures Plan identifies the computer hardware and software resources used in the EMTS project. It identifies the major software packages that are available for use by principal investigators for the analysis of data...
Kay, Robert T.; Mills, Patrick C.; Dunning, Charles P.; Yeskis, Douglas J.; Ursic, James R.; Vendl, Mark
2004-01-01
The effectiveness of 28 methods used to characterize the fractured Galena-Platteville aquifer at eight sites in northern Illinois and Wisconsin is evaluated. Analysis of government databases, previous investigations, topographic maps, aerial photographs, and outcrops was essential to understanding the hydrogeology in the area to be investigated. The effectiveness of surface-geophysical methods depended on site geology. Lithologic logging provided essential information for site characterization. Cores were used for stratigraphy and geotechnical analysis. Natural-gamma logging helped identify the effect of lithology on the location of secondary-permeability features. Caliper logging identified large secondary-permeability features. Neutron logs identified trends in matrix porosity. Acoustic-televiewer logs identified numerous secondary-permeability features and their orientation. Borehole-camera logs also identified a number of secondary-permeability features. Borehole ground-penetrating radar identified lithologic and secondary-permeability features; however, the accuracy and completeness of this method are uncertain. Single-point-resistance, density, and normal resistivity logs were of limited use. Water-level and water-quality data identified flow directions and indicated the horizontal and vertical distribution of aquifer permeability and the depth of the permeable features. Temperature, spontaneous potential, and fluid-resistivity logging identified few secondary-permeability features at some sites and several features at others. Flowmeter logging was the most effective geophysical method for characterizing secondary-permeability features. Aquifer tests provided insight into the permeability distribution, identified hydraulically interconnected features and the presence of heterogeneity and anisotropy, and determined effective porosity. Aquifer heterogeneity prevented calculation of accurate hydraulic properties from some tests. Different methods, such as flowmeter logging and slug testing, occasionally produced different interpretations. Aquifer characterization improved with an increase in the number of data points, the period of data collection, and the number of methods used.
ERIC Educational Resources Information Center
Ram, Nilam; Grimm, Kevin J.
2009-01-01
Growth mixture modeling (GMM) is a method for identifying multiple unobserved sub-populations, describing longitudinal change within each unobserved sub-population, and examining differences in change among unobserved sub-populations. We provide a practical primer that may be useful for researchers beginning to incorporate GMM analysis into their…
Development of a probabilistic analysis methodology for structural reliability estimation
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.
1991-01-01
A novel probabilistic analysis method for assessing structural reliability is presented, combining fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, the method establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
Contents include attack trees for modeling and analysis, misuse and abuse cases, and formal methods, including Software Cost Reduction and Common... modern or efficient techniques. • Requirements analysis typically is either not performed at all (identified requirements are directly specified without any analysis or modeling) or is restricted to functional requirements and ignores quality requirements and other nonfunctional requirements
ON IDENTIFIABILITY OF NONLINEAR ODE MODELS AND APPLICATIONS IN VIRAL DYNAMICS
MIAO, HONGYU; XIA, XIAOHUA; PERELSON, ALAN S.; WU, HULIN
2011-01-01
Ordinary differential equations (ODE) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last 2 decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models, and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV, influenza and hepatitis viruses are given to illustrate how to apply these identifiability analysis methods in practice. PMID:21785515
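A minimal sketch of sensitivity-based identifiability analysis on a toy viral-dynamics model (not one of the reviewed case studies) looks like this; near-zero Fisher information eigenvalues flag practically non-identifiable parameter directions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: dV/dt = p - c * V with parameters theta = (p, c).
def simulate(theta, t_eval):
    p, c = theta
    sol = solve_ivp(lambda t, v: p - c * v[0], (0, t_eval[-1]), [10.0],
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y[0]

theta0 = np.array([5.0, 0.5])
t = np.linspace(0.5, 10, 20)

# Finite-difference output sensitivities, one column per parameter.
S = np.empty((len(t), len(theta0)))
for j in range(len(theta0)):
    d = np.zeros_like(theta0); d[j] = 1e-6 * theta0[j]
    S[:, j] = (simulate(theta0 + d, t) - simulate(theta0 - d, t)) / (2 * d[j])

# Practical identifiability: eigenvalues of the Fisher information matrix.
# Here V0 = p/c, so the output starts at steady state and the two sensitivity
# columns are proportional: one eigenvalue is near zero and p, c cannot be
# estimated separately. Changing V0 restores identifiability.
fim = S.T @ S
print("FIM eigenvalues:", np.linalg.eigvalsh(fim))
```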
Cragun, Deborah; Pal, Tuya; Vadaparampil, Susan T; Baldwin, Julie; Hampel, Heather; DeBate, Rita D
2016-07-01
Qualitative comparative analysis (QCA) was developed over 25 years ago to bridge the qualitative and quantitative research gap. Upon searching PubMed and the Journal of Mixed Methods Research, this review identified 30 original research studies that utilized QCA. Perceptions that QCA is complex and provides few relative advantages over other methods may be limiting QCA adoption. Thus, to overcome these perceptions, this article demonstrates how to perform QCA using data from fifteen institutions that implemented universal tumor screening (UTS) programs to identify patients at high risk for hereditary colorectal cancer. In this example, QCA revealed a combination of conditions unique to effective UTS programs. Results informed additional research and provided a model for improving patient follow-through after a positive screen.
NASA Astrophysics Data System (ADS)
Melliana, Armen, Yusrizal, Akmal, Syarifah
2017-11-01
PT Nira Murni Construction is a contractor of PT Chevron Pacific Indonesia engaged in contracting, fabrication, maintenance construction supply, and labor services. The high accident rate in this company is caused by a lack of awareness of workplace safety. An effort is therefore required to reduce the accident rate so that the company's financial losses can be minimized. In this study, the Safe T-Score method is used to analyze the accident rate by measuring the frequency level. The analysis continues with risk management methods, which cover hazard identification, risk measurement, and risk management. The final analysis uses Job Safety Analysis (JSA) to identify the effects of accidents. From the results of this study it can be concluded that the JSA method has not been implemented properly. The JSA method therefore needs follow-up in a future study so that it can be properly applied to prevent occupational accidents.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes which are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.
2017-02-01
The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis are used to identify key biological processes and proteins that were plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints.
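The published PRFR algorithm is not reproduced here. One plausible reading of "pre-conditioning" (an assumption on our part) is to denoise the outcome with a regularized linear model before fitting the forest; the data, sizes, and two-stage structure below are therefore an illustrative guess, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestRegressor

# Toy data: samples x SNP dosages (0/1/2) and a binary toxicity endpoint.
rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(368, 1000)).astype(float)
y = (X[:, :20].sum(axis=1) + rng.normal(0, 2, 368) > 20).astype(float)

# Pre-conditioning: replace the noisy endpoint with the fitted values of a
# regularized linear model, denoising the target before the forest stage.
y_pre = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X, y).predict(X)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_pre)
risk = rf.predict(X)                                  # polygenic risk score
top_snps = np.argsort(rf.feature_importances_)[::-1][:10]
print("highest-importance SNP indices:", top_snps)
```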
Lightfoot, Emma; O’Connell, Tamsin C.
2016-01-01
Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals’ homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific homeland should not be attempted. PMID:27124001
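The MAD-based outlier screen recommended above is simple to sketch; the δ18O values below are invented, and the cutoff k is a conventional choice rather than the study's:

```python
import numpy as np

def mad_outliers(values, k=3.0):
    """Flag values more than k scaled MADs from the median."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = 1.4826 * np.median(np.abs(v - med))  # scaled to ~sigma for normals
    return np.abs(v - med) > k * mad

d18O = np.array([26.1, 25.8, 26.4, 25.9, 26.0, 23.1, 26.2])
print(mad_outliers(d18O))  # the 23.1 individual is a candidate migrant
```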
Nicolau, Monica; Levine, Arnold J; Carlsson, Gunnar
2011-04-26
High-throughput biological data, whether generated as sequencing, transcriptional microarrays, proteomic, or other means, continues to require analytic methods that address its high dimensional aspects. Because the computational part of data analysis ultimately identifies shape characteristics in the organization of data sets, the mathematics of shape recognition in high dimensions continues to be a crucial part of data analysis. This article introduces a method that extracts information from high-throughput microarray data and, by using topology, provides greater depth of information than current analytic techniques. The method, termed Progression Analysis of Disease (PAD), first identifies robust aspects of cluster analysis, then goes deeper to find a multitude of biologically meaningful shape characteristics in these data. Additionally, because PAD incorporates a visualization tool, it provides a simple picture or graph that can be used to further explore these data. Although PAD can be applied to a wide range of high-throughput data types, it is used here as an example to analyze breast cancer transcriptional data. This identified a unique subgroup of Estrogen Receptor-positive (ER(+)) breast cancers that express high levels of c-MYB and low levels of innate inflammatory genes. These patients exhibit 100% survival and no metastasis. No supervised step beyond distinction between tumor and healthy patients was used to identify this subtype. The group has a clear and distinct, statistically significant molecular signature, it highlights coherent biology but is invisible to cluster methods, and does not fit into the accepted classification of Luminal A/B, Normal-like subtypes of ER(+) breast cancers. We denote the group as c-MYB(+) breast cancer.
Discrete choice experiments of pharmacy services: a systematic review.
Vass, Caroline; Gray, Ewan; Payne, Katherine
2016-06-01
Background Two previous systematic reviews have summarised the application of discrete choice experiments to value preferences for pharmacy services. These reviews identified a total of twelve studies and described how discrete choice experiments have been used to value pharmacy services, but did not describe or discuss the application of methods used in the design or analysis. Aims (1) To update the most recent systematic review and critically appraise current discrete choice experiments of pharmacy services in line with published reporting criteria; and (2) to provide an overview of key methodological developments in the design and analysis of discrete choice experiments. Methods The review used a comprehensive strategy to identify eligible studies (published between 1990 and 2015) by searching electronic databases for key terms related to discrete choice and best-worst scaling (BWS) experiments. All healthcare choice experiments were then hand-searched for key terms relating to pharmacy. Data were extracted using a published checklist. Results A total of 17 discrete choice experiments eliciting preferences for pharmacy services were identified for inclusion in the review. No BWS studies were identified. The studies elicited preferences from a variety of populations (pharmacists, patients, students) for a range of pharmacy services. Most studies were from a United Kingdom setting, although examples from Europe, Australia and North America were also identified. Discrete choice experiments for pharmacy services tended to include more attributes than non-pharmacy choice experiments. Few studies reported the use of qualitative research methods in the design and interpretation of the experiments (n = 9) or the use of new methods of analysis to identify and quantify preference and scale heterogeneity (n = 4). No studies reported the use of Bayesian methods in their experimental design. Conclusion Incorporating more sophisticated methods in the design of pharmacy-related discrete choice experiments could help researchers produce more efficient experiments which are better suited to valuing complex pharmacy services. Pharmacy-related discrete choice experiments could also benefit from more sophisticated analytical techniques, such as investigations into scale and preference heterogeneity. Employing these sophisticated methods for both design and analysis could extend the usefulness of discrete choice experiments to inform health and pharmacy policy.
Identification of the isomers using principal component analysis (PCA) method
NASA Astrophysics Data System (ADS)
Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur
2016-03-01
In this work, we have carried out a detailed statistical analysis of experimental mass spectra from xylene isomers. Principal Component Analysis (PCA) was used to identify isomers that cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as an energy source for the ionisation processes. We performed experiments and collected data which were analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method in providing insight for distinguishing isomers that cannot be identified using conventional mass analysis obtained through dissociative ionisation processes on these molecules. The dependence of the PCA results on the laser pulse energy and the background pressure in the spectrometer is also presented in this work.
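A minimal sketch of the core analysis step, assuming each row of the data matrix is one mass spectrum. The spectra below are synthetic stand-ins (a shared base fragment pattern plus a small isomer-specific signature), not the xylene measurements, and the preprocessing is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
base = rng.random(300)                     # shared fragment pattern
spectra, labels = [], []
for iso in range(3):                       # three "isomers"
    signature = 0.05 * rng.random(300)     # isomer-specific fragment weights
    for _ in range(20):
        spectra.append(base + signature + 0.01 * rng.normal(size=300))
        labels.append(iso)
labels = np.array(labels)

X = StandardScaler().fit_transform(np.array(spectra))
pca = PCA(n_components=2)
scores = pca.fit_transform(X)              # each spectrum -> 2 scores
print(pca.explained_variance_ratio_)
for iso in range(3):                       # group separation along PC1
    print(f"isomer {iso}: mean PC1 score {scores[labels == iso, 0].mean():+.2f}")
```

Plotting the two score columns colour-coded by isomer shows visually whether the spectra separate along the leading components.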
Structure-sequence based analysis for identification of conserved regions in proteins
Zemla, Adam T; Zhou, Carol E; Lam, Marisa W; Smith, Jason R; Pardes, Elizabeth
2013-05-28
Disclosed are computational methods, and associated hardware and software products, for scoring conservation in a protein structure based on a computationally identified family or cluster of protein structures. A method of computationally identifying a family or cluster of protein structures is also disclosed herein.
Huo, Zhiguang; Ding, Ying; Liu, Silvia; Oesterreich, Steffi; Tseng, George
2016-01-01
Disease phenotyping by omics data has become a popular approach that potentially can lead to better personalized treatment. Identifying disease subtypes via unsupervised machine learning is the first step towards this goal. In this paper, we extend a sparse K-means method towards a meta-analytic framework to identify novel disease subtypes when expression profiles of multiple cohorts are available. The lasso regularization and meta-analysis identify a unique set of gene features for subtype characterization. An additional pattern matching reward function guarantees consistent subtype signatures across studies. The method was evaluated by simulations and leukemia and breast cancer data sets. The identified disease subtypes from meta-analysis were characterized with improved accuracy and stability compared to single study analysis. The breast cancer model was applied to an independent METABRIC dataset and generated improved survival difference between subtypes. These results provide a basis for diagnosis and development of targeted treatments for disease subgroups. PMID:27330233
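The single-cohort building block of this framework is sparse K-means in the style of Witten and Tibshirani: alternate K-means on weighted features with an L1-constrained update of per-feature weights derived from the between-cluster sum of squares. The sketch below is a simplified single-study version under that assumption; the meta-analytic extension, pattern-matching reward, and tuning-parameter selection are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def bcss(X, labels):
    """Per-feature between-cluster sum of squares."""
    total = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
    within = np.zeros(X.shape[1])
    for k in np.unique(labels):
        Xk = X[labels == k]
        within += ((Xk - Xk.mean(axis=0)) ** 2).sum(axis=0)
    return total - within

def sparse_kmeans(X, k=2, s=3.0, n_iter=5):
    w = np.ones(X.shape[1]) / np.sqrt(X.shape[1])   # uniform start
    for _ in range(n_iter):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X * np.sqrt(w))
        a = bcss(X, labels)
        # soft-threshold a so the normalized weights satisfy ||w||_1 <= s
        for delta in np.linspace(0, a.max(), 100):
            w_try = np.maximum(a - delta, 0)
            if w_try.sum() == 0:
                break
            w_try = w_try / np.linalg.norm(w_try)
            if w_try.sum() <= s:
                break
        w = w_try
    return labels, w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))
X[:30, :5] += 2.0                         # 5 informative features, 2 subtypes
labels, w = sparse_kmeans(X, k=2)
print("top-weighted features:", np.argsort(w)[-5:])
```

The lasso-style constraint drives most feature weights to zero, which is what yields the compact gene signature characterizing each subtype.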
Detecting complexes from edge-weighted PPI networks via genes expression analysis.
Zhang, Zehua; Song, Jian; Tang, Jijun; Xu, Xinying; Guo, Fei
2018-04-24
Identifying complexes from PPI networks has become a key problem in elucidating protein functions and identifying signalling and biological processes in a cell. Proteins that bind as complexes play important roles in cellular activity, and accurate determination of complexes in PPI networks is crucial for understanding the principles of cellular organization. We propose a novel method to identify complexes on PPI networks, based on different co-expression information. First, we use the Markov Cluster Algorithm with an edge-weighting scheme to calculate complexes on PPI networks. Then, we propose some significant features, such as graph information and gene expression analysis, to filter and modify the complexes predicted by the Markov Cluster Algorithm. To evaluate our method, we test it on two experimental yeast PPI networks. On the DIP network, our method has Precision and F-Measure values of 0.6004 and 0.5528. On the MIPS network, our method has F-Measure and Sn values of 0.3774 and 0.3453. Compared to existing methods, our method improves the Precision value by at least 0.1752, the F-Measure value by at least 0.0448, and the Sn value by at least 0.0771. Experiments show that our method achieves better results than some state-of-the-art methods for identifying complexes on PPI networks, with the prediction quality improved in terms of the evaluation criteria.
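The first step the paper describes is the Markov Cluster Algorithm (MCL) on an edge-weighted network. A minimal MCL sketch: alternate expansion (a matrix power) and inflation (an elementwise power followed by column renormalization) on a column-stochastic matrix until the flow settles; nodes sharing an attractor form a predicted complex. The parameters and the toy network are illustrative, and the paper's co-expression weighting and post-filtering are not reproduced.

```python
import numpy as np

def normalize(M):
    return M / M.sum(axis=0, keepdims=True)

def mcl(adj, expansion=2, inflation=2.0, n_iter=50):
    M = normalize(adj.astype(float) + np.eye(len(adj)))   # add self-loops
    for _ in range(n_iter):
        M = normalize(np.linalg.matrix_power(M, expansion) ** inflation)
    # rows with remaining mass are attractors; their nonzero columns form a cluster
    clusters = {frozenset(np.flatnonzero(row > 1e-6).tolist())
                for row in M if row.sum() > 1e-6}
    return [sorted(c) for c in clusters]

# Toy weighted PPI network: two triangles joined by one weak edge.
A = np.zeros((6, 6))
for i, j, w in [(0, 1, .9), (0, 2, .8), (1, 2, .9),
                (3, 4, .9), (3, 5, .8), (4, 5, .9), (2, 3, .1)]:
    A[i, j] = A[j, i] = w
print(mcl(A))                               # expected: two 3-protein complexes
```

Raising the inflation parameter fragments the flow more aggressively and yields smaller, tighter complexes.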
Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.
2015-01-01
PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895
NASA Astrophysics Data System (ADS)
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that visible diffuse reflectance spectroscopy can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. Partial Least Squares Discriminant Analysis (PLSDA) is commonly used to identify blood species with spectroscopic methods, and the Least Squares Support Vector Machine (LSSVM) has proven well suited to discrimination analysis. In this research, both PLSDA and LSSVM were used for human blood discrimination. Compared with PLSDA, LSSVM enhanced the performance of the identification models. The overall results showed that LSSVM was the more feasible method for distinguishing human from animal blood, and demonstrated that it is a reliable, robust, and more accurate method for human blood identification.
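Of the two classifiers compared, PLS-DA is the easier to sketch with standard tooling: fit partial least squares regression against a 0/1 class label and cut the predictions at 0.5. The spectra below are synthetic stand-ins, not the blood measurements; LS-SVM is not part of scikit-learn and is omitted here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_wl = 200                                        # number of wavelengths
human = rng.normal(0.0, 1.0, (40, n_wl)) + np.linspace(0, 1, n_wl)
animal = rng.normal(0.3, 1.0, (40, n_wl)) + np.linspace(1, 0, n_wl)
X = np.vstack([human, animal])
y = np.array([1] * 40 + [0] * 40)                 # 1 = human, 0 = nonhuman

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)   # 0.5 decision cut
print("accuracy:", (pred == y_te).mean())
```

The Kennard-Stone alternative to the random split would pick calibration samples to span the spectral space uniformly, which is the design question the paper examines.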
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies of fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published articles. Other sources were checked to identify non-published literature. STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND DIAGNOSTIC METHODS: The eligibility criteria were studies that: (1) assessed the accuracy of fluorescence-based methods of detecting caries lesions on occlusal, approximal or smooth surfaces, in primary or permanent human teeth, in the laboratory or clinical setting; (2) used a reference standard; and (3) reported sufficient data relating to the sample size and the accuracy of the methods. A diagnostic 2×2 table was extracted from the included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (diagnostic odds ratio and summary receiver-operating characteristic curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five of the 434 articles initially identified met the inclusion criteria. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias. Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
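The accuracy parameters named above come from per-study 2×2 tables. Below is a minimal sketch of the pooling arithmetic on invented tables; the review's actual meta-analysis (summary ROC modelling, heterogeneity assessment) is more involved and not reproduced here.

```python
import numpy as np

studies = np.array([            # TP, FP, FN, TN per study (illustrative)
    [40,  5, 10, 45],
    [55, 12,  8, 60],
    [30,  4, 12, 50],
])

tp, fp, fn, tn = studies.sum(axis=0)     # simple fixed-effect pooling
sens = tp / (tp + fn)
spec = tn / (tn + fp)
dor = (tp * tn) / (fp * fn)              # diagnostic odds ratio
print(f"pooled sensitivity={sens:.2f} specificity={spec:.2f} DOR={dor:.1f}")
```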
SAMPLING AND ANALYSIS OF NANOMATERIALS IN THE ENVIRONMENT: A STATE-OF-THE-SCIENCE REVIEW
This state-of-the-science review was undertaken to identify and assess currently available sampling and analysis methods to identify and quantify the occurrence of nanomaterials in the environment. The environmental and human health risks associated with nanomaterials are largely...
Improving Family Forest Knowledge Transfer through Social Network Analysis
ERIC Educational Resources Information Center
Gorczyca, Erika L.; Lyons, Patrick W.; Leahy, Jessica E.; Johnson, Teresa R.; Straub, Crista L.
2012-01-01
To better engage Maine's family forest landowners our study used social network analysis: a computational social science method for identifying stakeholders, evaluating models of engagement, and targeting areas for enhanced partnerships. Interviews with researchers associated with a research center were conducted to identify how social network…
ADAGE signature analysis: differential expression analysis with data-defined gene sets.
Tan, Jie; Huyck, Matthew; Hu, Dongbo; Zelaya, René A; Hogan, Deborah A; Greene, Casey S
2017-11-22
Gene set enrichment analysis and overrepresentation analyses are commonly used methods to determine the biological processes affected by a differential expression experiment. This approach requires biologically relevant gene sets, which are currently curated manually, limiting their availability and accuracy in many organisms without extensively curated resources. New feature learning approaches can now be paired with existing data collections to directly extract functional gene sets from big data. Here we introduce a method to identify perturbed processes. In contrast with methods that use curated gene sets, this approach uses signatures extracted from public expression data. We first extract expression signatures from public data using ADAGE, a neural network-based feature extraction approach. We next identify signatures that are differentially active under a given treatment. Our results demonstrate that these signatures represent biological processes that are perturbed by the experiment. Because these signatures are directly learned from data without supervision, they can identify uncurated or novel biological processes. We implemented ADAGE signature analysis for the bacterial pathogen Pseudomonas aeruginosa. For the convenience of different user groups, we implemented both an R package (ADAGEpath) and a web server ( http://adage.greenelab.com ) to run these analyses. Both are open-source to allow easy expansion to other organisms or signature generation methods. We applied ADAGE signature analysis to an example dataset in which wild-type and ∆anr mutant cells were grown as biofilms on the Cystic Fibrosis genotype bronchial epithelial cells. We mapped active signatures in the dataset to KEGG pathways and compared with pathways identified using GSEA. The two approaches generally return consistent results; however, ADAGE signature analysis also identified a signature that revealed the molecularly supported link between the MexT regulon and Anr. We designed ADAGE signature analysis to perform gene set analysis using data-defined functional gene signatures. This approach addresses an important gap for biologists studying non-traditional model organisms and those without extensive curated resources available. We built both an R package and web server to provide ADAGE signature analysis to the community.
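The "differentially active signature" step reduces to scoring each sample against a gene-by-signature weight matrix and testing activities between conditions. A minimal sketch with simulated data and random stand-in weights; training the ADAGE network itself (or using the ADAGEpath package) is out of scope here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_sig = 1000, 50
W = rng.normal(size=(n_genes, n_sig))       # stand-in signature weights
control = rng.normal(size=(20, n_genes))
treated = rng.normal(size=(20, n_genes))
treated[:, :200] += 2.0                     # perturb a block of genes

act_c = control @ W                         # sample x signature activities
act_t = treated @ W
t, p = stats.ttest_ind(act_t, act_c, axis=0)
print("differentially active signatures:",
      np.flatnonzero(p < 0.05 / n_sig))     # Bonferroni-corrected
```

In the real workflow the flagged signatures, having been learned without supervision, are then mapped to known pathways (e.g. KEGG) for interpretation.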
Identification of Uvaria sp by barcoding coupled with high-resolution melting analysis (Bar-HRM).
Osathanunkul, M; Madesis, P; Ounjai, S; Pumiputavon, K; Somboonchai, R; Lithanatudom, P; Chaowasku, T; Wipasa, J; Suwannapoom, C
2016-01-13
DNA barcoding, which was developed about a decade ago, relies on short, standardized regions of the genome to identify plant and animal species. This method can be used not only to identify known species but also to discover novel ones. Numerous sequences are stored in online databases worldwide. One way to save cost and time (by omitting the sequencing step) in species identification is to use available barcode data to design optimized primers for further analysis, such as high-resolution melting analysis (HRM). This study aimed to determine the effectiveness of the hybrid method Bar-HRM (DNA barcoding combined with HRM) in identifying species that share similar external morphological features, rather than conducting traditional taxonomic identification, which requires major parts (leaf, flower, fruit) of the specimens. The specimens used for testing were those that could not be identified at the species level and could be either Uvaria longipes or Uvaria wrayi, as indicated by morphological identification. Primer pairs derived from chloroplast regions (matK, psbA-trnH, rbcL, and trnL) were used in the Bar-HRM. The results obtained from the psbA-trnH primers were good enough to help identify the specimens, while the rest were not. Bar-HRM analysis proved to be a fast and cost-effective method for plant species identification.
A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis
NASA Technical Reports Server (NTRS)
Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method will be given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record will be given. The results indicate that low-frequency components totally missed by Fourier analysis are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
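A minimal sketch of the Hilbert step: once EMD has split a record into intrinsic mode functions (EMD itself is omitted here; third-party libraries such as PyEMD provide it), the analytic signal of each mode gives instantaneous amplitude and frequency, which is what recovers the low-frequency content that Fourier analysis smears out. The chirp-like mode below is synthetic.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100.0                                     # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
imf = np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)  # a chirp-like intrinsic mode

analytic = hilbert(imf)                        # analytic signal via FFT
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency, Hz
print(inst_freq[:5], inst_freq[-5:])           # frequency rises along the record
```

Binning amplitude over the time-frequency plane, mode by mode, yields the Hilbert spectrum that the abstract contrasts with Fourier and wavelet views.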
Wu, Shuaibin; Yang, Kaiguang; Liang, Zhen; Zhang, Lihua; Zhang, Yukui
2011-10-30
A formic acid (FA)-assisted sample preparation method was presented for protein identification via mass spectrometry (MS). Specifically, an aqueous solution containing 2% FA and dithiothreitol was selected to perform protein denaturation, cleavage at aspartic acid (D) sites and reduction of disulfide linkages simultaneously at 108°C for 2 h. Subsequently, FA was removed via vacuum concentration. Finally, iodoacetamide (IAA) alkylation and trypsin digestion could be performed in order. A series of model proteins (BSA, β-lactoglobulin and apo-transferrin) were each treated using this method, followed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis. The number of identified peptides increased by ∼80% in comparison with the conventional urea-assisted sample preparation method. Moreover, BSA identification was achieved efficiently down to the femtomole level (25 ± 0% sequence coverage and 16 ± 1 peptides) via this method. In contrast, no peptides were confidently identified via the urea-assisted method before desalting with a C18 ZipTip. The absence of urea in this sample preparation method was an advantage for more favorable digestion and MALDI-TOF MS analysis. The performance of the two methods on a real sample (rat liver proteome) was also compared, followed by analysis with a nanoflow reversed-phase liquid chromatography with electrospray ionization tandem mass spectrometry system. As a result, 1335 ± 43 peptides were identified confidently (false discovery rate <1%) via the FA-assisted method, corresponding to 295 ± 12 proteins (with top match = 1 and requiring at least 2 unique peptides). In contrast, only 1107 ± 16 peptides (corresponding to 231 ± 10 proteins) were obtained from the conventional urea-assisted method. The approach thus serves as a more efficient protein sample preparation method for studying specific proteomes, and provides assistance in developing other proteomics analysis methods, such as quantitative peptide analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
Direct digestion of proteins in living cells into peptides for proteomic analysis.
Chen, Qi; Yan, Guoquan; Gao, Mingxia; Zhang, Xiangmin
2015-01-01
To analyze the proteome of an extremely low number of cells, or even a single cell, we established a new method for digesting whole cells into mass-spectrometry-identifiable peptides in a single step within 2 h. Our sampling method greatly simplified the processes of cell lysis, protein extraction, protein purification, and overnight digestion, without compromising efficiency. We used our method to digest samples of around one hundred cells. As far as we know, there is no previous report of proteome analysis starting directly with as few as 100 cells. We identified an average of 109 proteins from 100 cells, and with three replicates, the number of proteins rose to 204. Good reproducibility was achieved, showing the stability and reliability of the method. Gene Ontology analysis revealed that proteins in different cellular compartments were well represented.
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for a long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that produces high-molecular-weight DNA, maximizes detectable diversity and is relatively free from contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used for the extraction of DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produced the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly lower than that obtained with the DNA extraction-kit method. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
Babbitt, Patricia C.; Ferrin, Thomas E.
2017-01-01
Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133
DNA analysis in Disaster Victim Identification.
Montelius, Kerstin; Lindblom, Bertil
2012-06-01
DNA profiling and matching is one of the primary methods to identify missing persons in a disaster, as defined by the Interpol Disaster Victim Identification Guide. The process to identify a victim by DNA includes: the collection of the best possible ante-mortem (AM) samples, the choice of post-mortem (PM) samples, DNA-analysis, matching and statistical weighting of the genetic relationship or match. Each disaster has its own scenario, and each scenario defines its own methods for identification of the deceased.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
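A minimal sketch of method (c), a random-intercept linear mixed model for longitudinal blood pressure with a SNP as a fixed effect, using statsmodels on simulated data. The kinship-based variance-covariance structure and the GRAMMAR decorrelation used in the workshop analyses are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_visits = 100, 3
subj = np.repeat(np.arange(n_subj), n_visits)
snp = np.repeat(rng.integers(0, 3, n_subj), n_visits)   # 0/1/2 allele count
age = np.tile(np.array([40, 45, 50]), n_subj)           # three visits
u = np.repeat(rng.normal(0, 5, n_subj), n_visits)       # subject random intercept
dbp = 80 + 1.5 * snp + 0.3 * age + u + rng.normal(0, 3, subj.size)

df = pd.DataFrame({"dbp": dbp, "snp": snp, "age": age, "subj": subj})
fit = smf.mixedlm("dbp ~ snp + age", df, groups=df["subj"]).fit()
print("SNP effect:", fit.params["snp"], "p =", fit.pvalues["snp"])
```

Using all repeated measurements, rather than the baseline alone, is what gives method (c) its extra power in the comparison above.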
Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.
2012-01-01
Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
Cragun, Deborah; Pal, Tuya; Vadaparampil, Susan T.; Baldwin, Julie; Hampel, Heather; DeBate, Rita D.
2015-01-01
Qualitative comparative analysis (QCA) was developed over 25 years ago to bridge the qualitative and quantitative research gap. Upon searching PubMed and the Journal of Mixed Methods Research, this review identified 30 original research studies that utilized QCA. Perceptions that QCA is complex and provides few relative advantages over other methods may be limiting QCA adoption. Thus, to overcome these perceptions, this article demonstrates how to perform QCA using data from fifteen institutions that implemented universal tumor screening (UTS) programs to identify patients at high risk for hereditary colorectal cancer. In this example, QCA revealed a combination of conditions unique to effective UTS programs. Results informed additional research and provided a model for improving patient follow-through after a positive screen. PMID:27429602
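The starting point of crisp-set QCA is a truth table: group the cases by their configuration of conditions and score each configuration's consistency with the outcome. A minimal sketch on hypothetical institution-level data; the condition names are invented for illustration, not taken from the study, and the minimization of configurations into a solution formula is not shown.

```python
import pandas as pd

# Hypothetical crisp-set data: three implementation conditions and an
# outcome (effective UTS program), one row per institution.
data = pd.DataFrame({
    "champion":   [1, 1, 1, 0, 0, 1, 0, 1],
    "automation": [1, 1, 0, 0, 1, 1, 0, 0],
    "follow_up":  [1, 0, 1, 0, 0, 1, 1, 0],
    "effective":  [1, 1, 1, 0, 0, 1, 0, 0],
})

truth = (data.groupby(["champion", "automation", "follow_up"])["effective"]
             .agg(n="size", consistency="mean")   # consistency with outcome
             .reset_index())
print(truth.sort_values("consistency", ascending=False))
```

Configurations with consistency 1.0 and adequate case counts are the candidates for the "combination of conditions unique to effective programs" that the study reports.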
Wright, Stuart J; Vass, Caroline M; Sim, Gene; Burton, Michael; Fiebig, Denzil G; Payne, Katherine
2018-02-28
Scale heterogeneity, or differences in the error variance of choices, may account for a significant amount of the observed variation in the results of discrete choice experiments (DCEs) when comparing preferences between different groups of respondents. The aim of this study was to identify if, and how, scale heterogeneity has been addressed in healthcare DCEs that compare the preferences of different groups. A systematic review identified all healthcare DCEs published between 1990 and February 2016. The full text of each DCE was then screened to identify studies that compared preferences using data generated from multiple groups. Data were extracted and tabulated on year of publication, samples compared, tests for scale heterogeneity, and analytical methods to account for scale heterogeneity. Narrative analysis was used to describe if, and how, scale heterogeneity was accounted for when preferences were compared. A total of 626 healthcare DCEs were identified. Of these, 199 (32%) aimed to compare the preferences of different groups specified at the design stage, while 79 (13%) compared the preferences of groups identified at the analysis stage. Of the 278 included papers, 49 (18%) discussed potential scale issues, 18 (7%) used a formal method of analysis to account for scale between groups, and 2 (1%) accounted for scale differences between preference groups at the analysis stage. Scale heterogeneity was present in 65% (n = 13) of studies that tested for it. Analytical methods to test for scale heterogeneity included coefficient plots (n = 5, 2%), heteroscedastic conditional logit models (n = 6, 2%), Swait and Louviere tests (n = 4, 1%), generalised multinomial logit models (n = 5, 2%), and scale-adjusted latent class analysis (n = 2, 1%). Scale heterogeneity is a prevalent issue in healthcare DCEs. Despite this, few published DCEs have discussed such issues, and fewer still have used formal methods to identify and account for the impact of scale heterogeneity. Formal methods to test for scale heterogeneity should be used; otherwise, the results of DCEs risk producing biased and potentially misleading conclusions regarding preferences for aspects of healthcare.
Statistical assessment of the learning curves of health technologies.
Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T
2001-01-01
(1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis, generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second was a case series of consecutive laparoscopic cholecystectomy procedures performed by ten surgeons; the third was randomised trial data derived from the laparoscopic procedure arm of a multicentre trial of groin hernia repair, supplemented by data from non-randomised operations performed during the trial. RESULTS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: Of 4571 abstracts identified, 272 (6%) were later included in the study after review of the full paper. Some 51% of studies assessed a surgical minimal access technique and 95% were case series. The statistical method used most often (60%) was splitting the data into consecutive parts (such as halves or thirds), with only 14% attempting a more formal statistical analysis. The reporting of the studies was poor, with 31% giving no details of data collection methods. RESULTS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: Of 9431 abstracts assessed, 115 (1%) were deemed appropriate for further investigation and, of these, 18 were included in the study. All of the methods for complex data sets were identified in the non-clinical literature. These were discriminant analysis, two-stage estimation of learning rates, generalised estimating equations, multilevel models, latent curve models, time series models and stochastic parameter models. In addition, eight new shapes of learning curves were identified. RESULTS - TESTING OF STATISTICAL METHODS: No one particular shape of learning curve performed significantly better than another. 
The performance of 'operation time' as a proxy for learning differed between the three procedures. Multilevel modelling using the laparoscopic cholecystectomy data demonstrated and measured surgeon-specific and confounding effects. The inclusion of non-randomised cases, despite the possible limitations of the method, enhanced the interpretation of learning effects. CONCLUSIONS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: The statistical methods used for assessing learning effects in health technology assessment have been crude and the reporting of studies poor. CONCLUSIONS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: A number of statistical methods for assessing learning effects were identified that had not hitherto been used in health technology assessment. There was a hierarchy of methods for the identification and measurement of learning, and the more sophisticated methods for both have had little if any use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment. CONCLUSIONS - TESTING OF STATISTICAL METHODS: It has been demonstrated that the portfolio of techniques identified can enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)
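As a small illustration of the "simple series" end of that hierarchy, the sketch below fits one classic learning-curve shape, a power law in case number, to simulated consecutive operation times for a single operator. The data and parameters are invented, and the multilevel models used for the multi-surgeon data are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
case = np.arange(1, 191)                     # 190 consecutive procedures
time = 120 * case ** -0.15 * rng.lognormal(0.0, 0.1, case.size)

def power_law(x, a, b):
    return a * x ** -b                       # a: initial time, b: learning rate

(a, b), _ = curve_fit(power_law, case, time, p0=(100.0, 0.1))
print(f"initial time ~{a:.0f} min, learning exponent b = {b:.2f}")
```

Comparing such fits across candidate curve shapes (exponential, plateau models, and so on) is the single-series analogue of the shape comparison reported above.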
C4 Software Technology Reference Guide - A Prototype.
1997-01-10
domain analysis methods include • Feature-oriented domain analysis (FODA) (see pg. 185), a domain analysis method based upon identifying the... Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ADA 235785). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1990. 178...domain analysis (FODA) (see pg. 185), in which a feature is a user-visible aspect or characteristic of the domain [Kang 90].) The features in a system
Operational modal analysis applied to the concert harp
NASA Astrophysics Data System (ADS)
Chomette, B.; Le Carrou, J.-L.
2015-05-01
Operational modal analysis (OMA) methods are useful for extracting modal parameters of operating systems. These methods are particularly interesting for investigating the modal basis of string instruments during operation, avoiding certain disadvantages of conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) method and a modified least-square complex exponential method in the case of a string instrument, to identify modal parameters of the instrument while it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings, and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with good accuracy, especially when they are close to the excitation frequency, using the modified LSCE method.
Jo, Kyuri; Jung, Inuk; Moon, Ji Hwan; Kim, Sun
2016-01-01
Motivation: To understand the dynamic nature of a biological process, it is crucial to identify perturbed pathways in an altered environment and also to infer regulators that trigger the response. Current time-series analysis methods, however, are not powerful enough to identify perturbed pathways and regulators simultaneously. Widely used approaches determine gene sets, such as differentially expressed genes or gene clusters, and these gene sets then need to be further interpreted in terms of biological pathways using other tools. Most pathway analysis methods are not designed for time-series data and do not consider gene-gene influence along the time dimension. Results: In this article, we propose a novel time-series analysis method, TimeTP, for determining transcription factors (TFs) regulating pathway perturbation, which narrows the focus to perturbed sub-pathways and utilizes the gene regulatory network and protein–protein interaction network to locate TFs triggering the perturbation. TimeTP first identifies perturbed sub-pathways that propagate the expression changes along time. Starting points of the perturbed sub-pathways are mapped into the network and the most influential TFs are determined by an influence maximization technique. The analysis result is visually summarized in a TF-Pathway map in time clock. TimeTP was applied to a PIK3CA knock-in dataset and found significant sub-pathways and their regulators relevant to the PIP3 signaling pathway. Availability and Implementation: TimeTP is implemented in Python and available at http://biohealth.snu.ac.kr/software/TimeTP/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: sunkim.bioinfo@snu.ac.kr PMID:27307609
Analysis of Parasite and Other Skewed Counts
Alexander, Neal
2012-01-01
Objective To review methods for the statistical analysis of parasite and other skewed count data. Methods Statistical methods for skewed count data are described and compared, with reference to those used over a ten-year period of Tropical Medicine and International Health. Two parasitological datasets are used for illustration. Results Ninety papers were identified, 89 with descriptive and 60 with inferential analysis. A lack of clarity is noted in the identification of measures of location, in particular the Williams and geometric means. The different measures are compared, emphasizing the legitimacy of the arithmetic mean for skewed data. In the published papers, the t test and related methods were often used on untransformed data, which is likely to be invalid. Several approaches to inferential analysis are described, emphasizing (1) non-parametric methods, while noting that they are not simply comparisons of medians, and (2) generalized linear modelling, in particular with the negative binomial distribution. Additional methods with potential for greater use, such as the bootstrap, are described. Conclusions Clarity is recommended when describing transformations and measures of location. It is suggested that non-parametric methods and generalized linear models are likely to be sufficient for most analyses. PMID:22943299
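A minimal sketch of two of the recommended routes for a two-group comparison of skewed counts: a negative binomial GLM, whose exponentiated coefficient is a ratio of arithmetic means under the log link, and a bootstrap interval. The egg-count-style data are simulated, and the dispersion parameter is fixed for simplicity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
group = np.repeat([0, 1], 100)
counts = rng.negative_binomial(n=1.2, p=0.05, size=200)
counts[group == 1] = rng.negative_binomial(n=1.2, p=0.03, size=100)

X = sm.add_constant(group)
fit = sm.GLM(counts, X,
             family=sm.families.NegativeBinomial()).fit()   # alpha fixed at 1
print("ratio of arithmetic means:", np.exp(fit.params[1]))

boot = [counts[rng.integers(0, 200, 200)].mean() for _ in range(1000)]
print("bootstrap 95% CI, overall mean:", np.percentile(boot, [2.5, 97.5]))
```

Note that the GLM compares arithmetic means directly on the count scale, avoiding the back-transformation ambiguities of the log-transform-then-t-test habit criticized above.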
Temperature analysis with voltage-current time differential operation of electrochemical sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay
A method for temperature analysis of a gas stream. The method includes identifying a temperature parameter of an affected waveform signal. The method also includes calculating a change in the temperature parameter by comparing the affected waveform signal with an original waveform signal. The method also includes generating a value from the calculated change which corresponds to the temperature of the gas stream.
Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun
2017-09-01
Identifying differentially expressed genes from among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that the nuclear norm shrinks all singular values, so it may not be the best approximation of the rank function. The truncated nuclear norm is defined as the sum of the smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm; it is named robust principal component analysis regularized by the truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because significant genes can be considered sparse signals, the differentially expressed genes are viewed as sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
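The distinctive ingredient is the shrinkage operator: leave the r largest singular values untouched and shrink only the tail, in contrast with plain nuclear-norm thresholding, which shrinks them all. The sketch below embeds that operator in a crude alternating low-rank/sparse split on simulated expression-like data; it is a sketch of the idea under invented parameters, not the optimisation scheme of the paper.

```python
import numpy as np

def tnn_shrink(M, r, tau):
    """Keep the top-r singular values intact; soft-threshold the tail."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = np.maximum(s[r:] - tau, 0)
    return (U * s) @ Vt

def soft(M, lam):
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0)

def trpca(D, r=3, tau=5.0, lam=0.1, n_iter=50):
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(n_iter):
        L = tnn_shrink(D - S, r, tau)   # low-rank background
        S = soft(D - L, lam)            # sparse part: candidate DE genes
    return L, S

rng = np.random.default_rng(4)
low_rank = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 40))
sparse = np.zeros((100, 40))
sparse[rng.integers(0, 100, 30), rng.integers(0, 40, 30)] = 8.0  # planted outliers
L, S = trpca(low_rank + sparse)
print("large entries recovered in S:", int(np.count_nonzero(np.abs(S) > 1)))
```

Genes whose rows of S carry large entries are the candidates for differential expression in this framing.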
1,4-Dioxane has been identified as a probable human carcinogen and an emerging contaminant in drinking water. The National Exposure Research Laboratory (NERL) has developed a method for the analysis of 1,4-dioxane in drinking water at ng/L concentrations. The method consists of...
CADDIS Volume 4. Data Analysis: Basic Principles & Issues
Use of inferential statistics in causal analysis, an introduction to data independence and autocorrelation, methods for identifying and controlling for confounding variables, and references for the Basic Principles section of Data Analysis.
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. Extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce errors in subsequent analysis. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method; border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
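For the thinning step, scikit-image's skeletonize plays the role of the symmetric border-voxel deletion described above; a 2-D toy shape keeps the sketch short, and the resulting skeleton would then be handed to graph-theoretic pruning (e.g. with networkx) to remove spurs and circles, which is not shown.

```python
import numpy as np
from skimage.morphology import skeletonize

img = np.zeros((60, 60), dtype=bool)
img[28:32, 5:55] = True    # horizontal bar
img[5:55, 28:32] = True    # vertical bar -> a thick cross, a toy "airway"
skel = skeletonize(img)    # iterative border-pixel deletion
print(int(skel.sum()), "centerline pixels from", int(img.sum()), "shape pixels")
```

Treating the skeleton pixels as graph nodes with 8- (or 26-) connectivity edges gives exactly the structure on which branch identification and circle removal operate.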
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all of the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
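A minimal sketch of the capacity-degradation idea with networkx: score each link by the rise in total shortest-path cost when its travel time is doubled, a simplified stand-in for the paper's traffic utility index (no traffic assignment or demand matrix is modelled).

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([            # (node, node, travel time)
    ("A", "B", 4), ("B", "C", 3), ("A", "C", 9), ("C", "D", 2), ("B", "D", 7),
])

def total_cost(g):
    """Sum of shortest-path travel times over all OD pairs."""
    lengths = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
    return sum(d for src in lengths.values() for d in src.values())

base = total_cost(G)
for u, v, data in G.edges(data=True):
    w0 = data["weight"]
    data["weight"] = 2 * w0            # degrade this link's capacity
    print(f"{u}-{v}: vulnerability score {total_cost(G) - base}")
    data["weight"] = w0                # restore
```

Ranking links by this score singles out the critical infrastructure; the sensitivity-analysis trick in the paper avoids recomputing the full assignment for every link, which is what makes large networks tractable.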
Wei, Q; Hu, Y
2009-01-01
The major hurdle in segmenting lung lobes in computed tomographic (CT) images is identifying fissure regions, which encase lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy utilizing 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the neural network. Tested on isotropic CT image stacks of 24 patients with pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons'/radiologists' accuracy of 80% for identifying fissure regions in clinical settings. This shows promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
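The GLCM half of the feature extraction is available directly in scikit-image. The sketch below quantizes a patch to 16 grey levels, as in the paper, and reads out Haralick-style features that would feed the neural-network classifier (not shown); the patch is random stand-in data, not CT.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
patch = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # stand-in CT patch
quantized = (patch // 16).astype(np.uint8)               # 16 grey levels

glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                    levels=16, symmetric=True, normed=True)
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)   # one feature vector per patch for the classifier
```

GLRLM features are not in scikit-image itself, but the workflow is the same: one fixed-length texture vector per patch, labelled fissure/non-fissure for training.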
Elumalai, Vetrimurugan; Brindha, K; Sithole, Bongani; Lakshmanan, Elango
2017-04-01
Mapping groundwater contaminants and identifying their sources are the initial steps in pollution control and mitigation. Due to the availability of different mapping methods and the large number of emerging pollutants, these methods need to be used together in decision making. The present study aims to map the contaminated areas in Richards Bay, South Africa, and to compare the results of ordinary kriging (OK) and inverse distance weighted (IDW) interpolation techniques. Statistical methods were also used to identify contamination sources. The Na-Cl groundwater type was dominant, followed by Ca-Mg-Cl. Data analysis indicates that silicate weathering, ion exchange and freshwater-seawater mixing are the major geochemical processes controlling the presence of major ions in groundwater. Factor analysis helped to confirm these results. Overlay analysis by OK and IDW gave different results: areas where groundwater was unsuitable as a drinking source were 419 and 116 km² for OK and IDW, respectively. Such divergent results make decision making difficult if only one method is used. Three highly contaminated zones within the study area were more accurately identified by OK. If large areas are identified as contaminated, as by IDW in this study, the mitigation measures will be expensive; if contaminated areas are underestimated, the management measures taken will not remain effective for long. Using multiple techniques, as in this study, helps avoid such misjudgments. Overall, the groundwater quality in this area was poor, and it is essential to identify an alternative drinking water source or to treat the groundwater before ingestion.
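Of the two interpolators compared, IDW is simple enough to sketch directly; ordinary kriging additionally needs a fitted variogram model, for which a library such as PyKrige is the usual route. The well coordinates and chloride concentrations below are invented.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # nearby wells dominate
    return (w * values).sum(axis=1) / w.sum(axis=1)

wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
chloride = np.array([120.0, 300.0, 180.0, 90.0])   # mg/L at each well
grid = np.array([[0.5, 0.5], [0.1, 0.9]])
print(idw(wells, chloride, grid))
```

Because IDW is an exact, locally weighted average with no spatial-correlation model, it tends to produce bullseye patterns around wells, one reason its contaminated-area estimate can diverge so sharply from kriging's.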
Zhang, Xiaohua Douglas; Yang, Xiting Cindy; Chung, Namjin; Gates, Adam; Stec, Erica; Kunapuli, Priya; Holder, Dan J; Ferrer, Marc; Espeseth, Amy S
2006-04-01
RNA interference (RNAi) high-throughput screening (HTS) experiments carried out using large (>5000 short interfering [si]RNA) libraries generate a huge amount of data. In order to use these data to identify the most effective siRNAs tested, it is critical to adopt and develop appropriate statistical methods. To address the questions in hit selection for RNAi HTS, we proposed a quartile-based method which is robust to outliers, true hits and nonsymmetrical data. We compared it with the more traditional tests, mean +/- k standard deviations (SD) and median +/- 3 median absolute deviations (MAD). The results suggested that the quartile-based method selected more hits than mean +/- k SD under the same preset error rate. The number of hits selected by median +/- k MAD was close to that of the quartile-based method. Further analysis suggested that the quartile-based method had the greatest power in detecting true hits, especially weak or moderate true hits. Our investigation also suggested that platewise analysis (determining effective siRNAs on a plate-by-plate basis) can adjust for systematic errors in different plates, while experimentwise analysis, in which effective siRNAs are identified in an analysis of the entire experiment, cannot. However, experimentwise analysis may detect a cluster of true positive hits placed together in one or several plates, while platewise analysis may not. To display hit selection results, we designed a specific figure called a plate-well series plot. We thus suggest the following strategy for hit selection in RNAi HTS experiments. First, choose the quartile-based method, or median +/- k MAD, for identifying effective siRNAs. Second, perform the chosen method experimentwise on transformed/normalized data, such as percentage inhibition, to check the possibility of hit clusters. If a cluster of selected hits is observed, repeat the analysis on untransformed data to determine whether the cluster is due to an artifact in the data. If no clusters of hits are observed, select hits by performing platewise analysis on transformed data. Third, adopt the plate-well series plot to visualize both the data and the hit selection results, as well as to check for artifacts.
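A minimal sketch of the two robust hit rules discussed above, applied platewise to percent-inhibition values on one simulated 384-well plate. The MAD rule is as described; for the quartile-based method the generic upper-fence form is shown, with the multiplier c left as a tunable constant rather than the error-rate-derived values of the paper.

```python
import numpy as np

def mad_hits(x, k=3.0):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # robust SD estimate
    return x > med + k * mad                    # one-sided: strong inhibition

def quartile_hits(x, c=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    return x > q3 + c * (q3 - q1)               # upper-fence rule

rng = np.random.default_rng(6)
plate = rng.normal(0, 5, 384)                   # percent inhibition, one plate
plate[[10, 42]] = (60, 45)                      # two planted true hits
print("MAD hits:     ", np.flatnonzero(mad_hits(plate)))
print("quartile hits:", np.flatnonzero(quartile_hits(plate)))
```

Running the same rules once over all plates pooled, instead of per plate, reproduces the platewise-versus-experimentwise contrast the abstract describes.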
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment; there are thus no parameters to be tweaked. Furthermore, it provides a reliability factor for the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for automated spectral analysis, where the analyzed spectrum is provided by a spectrometer with no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
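A minimal sketch of the unmixing idea: FastICA (which whitens with PCA internally) pulls statistically independent source spectra out of measured mixtures. Gaussian peaks stand in for pigment Raman spectra, and the mixing proportions are invented; the paper's reliability metrics are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

wn = np.linspace(200, 1800, 800)               # wavenumber axis, cm^-1

def peak(center, width):
    return np.exp(-0.5 * ((wn - center) / width) ** 2)

sources = np.vstack([peak(450, 15) + peak(1100, 20),    # "pigment" A
                     peak(700, 10) + peak(1500, 25)])   # "pigment" B
mixing = np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]])
mixtures = mixing @ sources + 0.01 * np.random.default_rng(9).normal(size=(3, 800))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixtures.T).T    # rows ~ source spectra
print(recovered.shape)                         # (2, 800), up to sign/scale
```

Matching each recovered row against a reference library, for instance by maximum correlation, supplies both the identification and a natural reliability score; ICA's sign and scale ambiguity is why correlation rather than raw distance is the sensible metric.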
COLD-SAT feasibility study safety analysis
NASA Technical Reports Server (NTRS)
Mchenry, Steven T.; Yost, James M.
1991-01-01
The Cryogenic On-orbit Liquid Depot-Storage, Acquisition, and Transfer (COLD-SAT) satellite presents some unique safety issues. The feasibility study conducted at NASA-Lewis called for a systems safety program involved from the initial design stage in order to eliminate and/or control the inherent hazards. Because of this, a hazards analysis method was needed that: (1) identified issues that needed to be addressed for a feasibility assessment; and (2) identified all potential hazards that would need to be controlled and/or eliminated during the detailed design phases. The developed analysis method is presented as well as the results generated for the COLD-SAT system.
Combined magnetic and gravity analysis
NASA Technical Reports Server (NTRS)
Hinze, W. J.; Braile, L. W.; Chandler, V. W.; Mazella, F. E.
1975-01-01
Efforts are made to identify methods of decreasing magnetic interpretation ambiguity by combined gravity and magnetic analysis, to evaluate these techniques in a preliminary manner, to consider the geologic and geophysical implications of correlation, and to recommend a course of action to evaluate methods of correlating gravity and magnetic anomalies. The major thrust of the study was a search and review of the literature. The literature of geophysics, geology, geography, and statistics was searched for articles dealing with spatial correlation of independent variables. An annotated bibliography referencing the germane articles and books is presented. The methods of combined gravity and magnetic analysis are identified and reviewed. A more comprehensive evaluation of two types of techniques is presented. Internal correspondence of anomaly amplitudes is examined and a combined analysis is done utilizing Poisson's theorem. The geologic and geophysical implications of gravity and magnetic correlation based on both theoretical and empirical relationships are discussed.
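For reference, Poisson's theorem (often called Poisson's relation in this context) links the two fields for a body of uniform density and magnetization; the statement below is the standard textbook form, not a formula quoted from the report.

```latex
% Poisson's relation: W is the magnetic potential, U the gravitational
% potential, J the magnetization intensity, rho the density contrast,
% G the gravitational constant, and i the magnetization direction.
W = -\frac{J}{G\rho}\,\frac{\partial U}{\partial i}
```

Where the relation holds, a magnetic anomaly can therefore be predicted from the measured gravity field (the derivative of U along i is the component of gravitational attraction in that direction), so agreement between predicted and observed magnetic anomalies tests whether common sources produce both fields.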
Soul, Jamie; Hardingham, Timothy E; Boot-Handford, Raymond P; Schwartz, Jean-Marc
2015-01-29
We describe a new method, PhenomeExpress, for the analysis of transcriptomic datasets to identify pathogenic disease mechanisms. Our analysis method includes input from both protein-protein interaction and phenotype similarity networks. This introduces valuable information from disease relevant phenotypes, which aids the identification of sub-networks that are significantly enriched in differentially expressed genes and are related to the disease relevant phenotypes. This contrasts with many active sub-network detection methods, which rely solely on protein-protein interaction networks derived from compounded data of many unrelated biological conditions and which are therefore not specific to the context of the experiment. PhenomeExpress thus exploits readily available animal model and human disease phenotype information. It combines this prior evidence of disease phenotypes with the experimentally derived disease data sets to provide a more targeted analysis. Two case studies, in subchondral bone in osteoarthritis and in Pax5 in acute lymphoblastic leukaemia, demonstrate that PhenomeExpress identifies core disease pathways in both mouse and human disease expression datasets derived from different technologies. We also validate the approach by comparison to state-of-the-art active sub-network detection methods, which reveals how it may enhance the detection of molecular phenotypes and provide a more detailed context to those previously identified as possible candidates.
Fu, Haiyan; Fan, Yao; Zhang, Xu; Lan, Hanyue; Yang, Tianming; Shao, Mei; Li, Sihan
2015-01-01
As an effective method, the fingerprint technique, which emphasizes the whole composition of samples, has already been used in various fields, especially in identifying and assessing the quality of herbal medicines. High-performance liquid chromatography (HPLC) and near-infrared (NIR) spectroscopy, with their unique characteristics of reliability, versatility, precision, and simple measurement, play an important role among all the fingerprint techniques. In this paper, a supervised pattern recognition method based on the PLSDA algorithm with HPLC and NIR data has been established to identify Hibiscus mutabilis L. and Berberidis radix, two common kinds of herbal medicines. By comparing principal component analysis (PCA), linear discriminant analysis (LDA), and particularly partial least squares discriminant analysis (PLSDA) with different fingerprint preprocessing of NIR spectral variables, the PLSDA model showed excellent performance on the analysis of samples as well as chromatograms. Most importantly, this pattern recognition method using HPLC and NIR can identify different collection parts, collection times, and different origins or various species belonging to the same genera of herbal medicines, which makes it a promising approach for the identification of complex information of herbal medicines. PMID:26345990
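A minimal PLS-DA sketch in scikit-learn on synthetic fingerprint data; the class structure, component count, and the argmax decision rule are illustrative assumptions rather than the paper's calibrated model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic fingerprints: 3 classes (e.g. origins) x 30 samples x 200 variables.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 1.0, (30, 200)) for m in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 30)

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
Ytr = np.eye(3)[ytr]                      # one-hot response turns PLS into PLS-DA

pls = PLSRegression(n_components=5).fit(Xtr, Ytr)
pred = pls.predict(Xte).argmax(axis=1)    # assign the class with the largest score
print(f"hold-out accuracy: {(pred == yte).mean():.2%}")
```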
Applying mixed methods to pretest the Pressure Ulcer Quality of Life (PU-QOL) instrument.
Gorecki, C; Lamping, D L; Nixon, J; Brown, J M; Cano, S
2012-04-01
Pretesting is key in the development of patient-reported outcome (PRO) instruments. We describe a mixed-methods approach based on interviews and Rasch measurement methods in the pretesting of the Pressure Ulcer Quality of Life (PU-QOL) instrument. We used cognitive interviews to pretest the PU-QOL in 35 patients with pressure ulcers with a view to identifying problematic items, followed by Rasch analysis to examine response options, appropriateness of the item series and biases due to question ordering (item fit). We then compared findings in an interactive and iterative process to identify potential strengths and weaknesses of PU-QOL items, and to guide decision-making about further revisions to items and design/layout. Although cognitive interviews largely supported items, they highlighted problems with layout, response options and comprehension. Findings from the Rasch analysis identified problems with response options through reversed thresholds. The use of a mixed-methods approach in pretesting the PU-QOL instrument proved beneficial for identifying problems with scale layout, response options and framing/wording of items. Rasch measurement methods are a useful addition to standard qualitative pretesting for evaluating strengths and weaknesses of early stage PRO instruments.
Revisiting a Meta-Analysis of Helpful Aspects of Therapy in a Community Counselling Service
ERIC Educational Resources Information Center
Quick, Emma L; Dowd, Claire; Spong, Sheila
2018-01-01
This small scale mixed methods study examines helpful events in a community counselling setting, categorising impacts of events according to Timulak's [(2007). Identifying core categories of client-identified impact of helpful events in psychotherapy: A qualitative meta-analysis. "Psychotherapy Research," 17, 305-314] meta-synthesis of…
Method of analysis of asbestiform minerals by thermoluminescence
Fisher, Gerald L.; Bradley, Edward W.
1980-01-01
A method for the qualitative and quantitative analysis of asbestiform minerals, including the steps of subjecting a sample to be analyzed to a first thermoluminescent analysis, annealing the sample, subjecting the sample to ionizing radiation, and subjecting the sample to a second thermoluminescent analysis. Glow curves are derived from the two thermoluminescent analyses and their shapes then compared to established glow curves of known asbestiform minerals to identify the type of asbestiform mineral in the sample. Also, during at least one of the analyses, the thermoluminescent response for each sample is integrated over a linear heating period of the analysis in order to derive the total thermoluminescence per milligram of sample. This total is a measure of the quantity of asbestiform mineral in the sample and may also be used to identify the source of the sample.
Identifying city PV roof resource based on Gabor filter
NASA Astrophysics Data System (ADS)
Ruhang, Xu; Zhilin, Liu; Yong, Huang; Xiaoyu, Zhang
2017-06-01
To identify a city's PV roof resources, the area and ownership distribution of residential buildings in an urban district should be assessed. Analysing remote sensing data is a promising approach to this assessment. Urban building roof area estimation is a major topic in remote sensing image information extraction. There are normally three ways to solve this problem. The first is pixel-based analysis, which builds on mathematical morphology or statistical methods; the second is object-based analysis, which is able to combine semantic information and expert knowledge; the third is a signal-processing approach. This paper presents a Gabor-filter-based method. Results show that the method is fast and achieves adequate accuracy.
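A small sketch of the signal-processing idea using scikit-image's Gabor filter on a synthetic tile; the frequency, orientations, and percentile threshold are placeholder choices, not the paper's settings.

```python
import numpy as np
from skimage.filters import gabor

# img: grayscale aerial/satellite tile as a 2-D float array (synthetic here).
rng = np.random.default_rng(3)
img = rng.random((128, 128))
img[40:80, 30:90] += 1.0          # a bright rectangular "roof"

# Filter bank over orientations; roofs respond strongly at their edge orientations.
responses = []
for theta in np.deg2rad([0, 45, 90, 135]):
    real, imag = gabor(img, frequency=0.15, theta=theta)
    responses.append(np.hypot(real, imag))  # magnitude response
energy = np.max(responses, axis=0)

mask = energy > np.percentile(energy, 90)   # crude roof-candidate mask
print("candidate roof pixels:", mask.sum())
```

In a real workflow the binary mask would plausibly be cleaned with morphological operations and intersected with cadastral data to attribute roof area to owners.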
Similarity of markers identified from cancer gene expression studies: observations from GEO.
Shi, Xingjie; Shen, Shihao; Liu, Jin; Huang, Jian; Zhou, Yong; Ma, Shuangge
2014-09-01
Gene expression profiling has been extensively conducted in cancer research. The analysis of multiple independent cancer gene expression datasets may provide additional information and complement single-dataset analysis. In this study, we conduct multi-dataset analysis and are interested in evaluating the similarity of cancer-associated genes identified from different datasets. The first objective of this study is to briefly review some statistical methods that can be used for such evaluation. Both marginal analysis and joint analysis methods are reviewed. The second objective is to apply those methods to 26 Gene Expression Omnibus (GEO) datasets on five types of cancers. Our analysis suggests that for the same cancer, the marker identification results may vary significantly across datasets, and different datasets share few common genes. In addition, datasets on different cancers share few common genes. The shared genetic basis of datasets on the same or different cancers, which has been suggested in the literature, is not observed in the analysis of GEO data. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
4C-ker: A Method to Reproducibly Identify Genome-Wide Interactions Captured by 4C-Seq Experiments.
Raviram, Ramya; Rocha, Pedro P; Müller, Christian L; Miraldi, Emily R; Badri, Sana; Fu, Yi; Swanzey, Emily; Proudhon, Charlotte; Snetkova, Valentina; Bonneau, Richard; Skok, Jane A
2016-03-01
4C-Seq has proven to be a powerful technique to identify genome-wide interactions with a single locus of interest (or "bait") that can be important for gene regulation. However, analysis of 4C-Seq data is complicated by the many biases inherent to the technique. An important consideration when dealing with 4C-Seq data is the differences in resolution of signal across the genome that result from differences in 3D distance separation from the bait. This leads to the highest signal in the region immediately surrounding the bait and increasingly lower signals in far-cis and trans. Another important aspect of 4C-Seq experiments is the resolution, which is greatly influenced by the choice of restriction enzyme and the frequency at which it can cut the genome. Thus, it is important that a 4C-Seq analysis method is flexible enough to analyze data generated using different enzymes and to identify interactions across the entire genome. Current methods for 4C-Seq analysis only identify interactions in regions near the bait or in regions located in far-cis and trans, but no method comprehensively analyzes 4C signals of different length scales. In addition, some methods also fail in experiments where chromatin fragments are generated using frequent cutter restriction enzymes. Here, we describe 4C-ker, a Hidden-Markov Model based pipeline that identifies regions throughout the genome that interact with the 4C bait locus. In addition, we incorporate methods for the identification of differential interactions in multiple 4C-seq datasets collected from different genotypes or experimental conditions. Adaptive window sizes are used to correct for differences in signal coverage in near-bait regions, far-cis and trans chromosomes. Using several datasets, we demonstrate that 4C-ker outperforms all existing 4C-Seq pipelines in its ability to reproducibly identify interaction domains at all genomic ranges with different resolution enzymes.
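The following sketch illustrates the general HMM-segmentation idea behind such pipelines, using the third-party hmmlearn package on synthetic windowed counts; it is not 4C-ker itself, and the windowing, log transformation, and two-state model are simplifying assumptions.

```python
import numpy as np
from hmmlearn import hmm

# counts: smoothed 4C-Seq read counts in adaptive windows along one chromosome
# (synthetic stand-in: Poisson background plus one interacting domain).
rng = np.random.default_rng(4)
counts = rng.poisson(2, 500).astype(float)
counts[200:260] += rng.poisson(15, 60)       # an "interacting" domain

X = np.log1p(counts).reshape(-1, 1)
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100,
                        random_state=0)
model.fit(X)
states = model.predict(X)

# Call the state with the higher mean "interacting".
hi = int(np.argmax(model.means_.ravel()))
print("windows called interacting:", int(np.sum(states == hi)))
```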
Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D
2008-01-01
Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
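A minimal sketch of the conditional probabilities underlying a CSA, on fabricated interval-recording data; the coding scheme and probabilities are invented for illustration.

```python
import numpy as np

# Sequential record: for each observation interval, whether the behavior
# occurred and whether the consequence followed, coded 0/1 (synthetic data).
rng = np.random.default_rng(5)
behavior = rng.integers(0, 2, 200)
consequence = np.where(behavior == 1,
                       rng.random(200) < 0.7,    # attention often follows behavior
                       rng.random(200) < 0.2).astype(int)

p_c_given_b = consequence[behavior == 1].mean()      # P(C | B)
p_c_given_notb = consequence[behavior == 0].mean()   # P(C | not B)

# In a CSA plot these two probabilities are the coordinates of one point;
# points far from the diagonal indicate a positive (or negative) contingency.
print(f"P(C|B) = {p_c_given_b:.2f}, P(C|~B) = {p_c_given_notb:.2f}")
print("contingency strength:", round(p_c_given_b - p_c_given_notb, 2))
```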
NASA Astrophysics Data System (ADS)
Wang, Z.; Quek, S. T.
2015-07-01
Performance of any structural health monitoring algorithm relies heavily on good measurement data. Hence, it is necessary to employ robust faulty sensor detection approaches to isolate sensors with abnormal behaviour and exclude the highly inaccurate data from subsequent analysis. Independent component analysis (ICA) is implemented to detect the presence of sensors showing abnormal behaviour. A normalized form of the relative partial decomposition contribution (rPDC) is proposed to identify the faulty sensor. Both additive and multiplicative types of faults are addressed and the detectability is illustrated using a numerical and an experimental example. An empirical method to establish control limits for detecting and identifying the type of fault is also proposed. The results show the effectiveness of the ICA and rPDC method in identifying faulty sensors, assuming that baseline cases are available.
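A simplified sketch of ICA-based fault isolation with scikit-learn: a baseline ICA model is fit on fault-free data, and per-sensor residual contributions flag the faulty channel. This is a generic squared-prediction-error style contribution, not the authors' normalized rPDC statistic.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Multichannel sensor records (samples x sensors); sensor 3 gets an additive bias fault.
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2000)
clean = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1.0, 1.7, 2.3, 3.1)])
X = clean @ rng.normal(size=(4, 6)) + 0.05 * rng.normal(size=(2000, 6))
X[1200:, 3] += 2.0                                    # additive fault on sensor 3

ica = FastICA(n_components=4, random_state=0).fit(X[:1000])  # baseline (fault-free) model
S = ica.transform(X)
resid = (X - ica.inverse_transform(S)) ** 2

# Per-sensor residual growth relative to baseline; the faulty sensor dominates.
contrib = resid[1200:].mean(axis=0) / resid[:1000].mean(axis=0)
print("suspect sensor:", int(np.argmax(contrib)))
```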
Quantitation of absorbed or deposited materials on a substrate that measures energy deposition
Grant, Patrick G.; Bakajin, Olgica; Vogel, John S.; Bench, Graham
2005-01-18
This invention provides a system and method for measuring an energy differential that correlates to a quantitative measurement of the mass of an applied localized material. Such a system and method remains compatible with other methods of analysis, such as, for example, quantitating the elemental or isotopic content, identifying the material, or using the material in biochemical analysis.
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
Byers, Helen; Wallis, Yvonne; van Veen, Elke M; Lalloo, Fiona; Reay, Kim; Smith, Philip; Wallace, Andrew J; Bowers, Naomi; Newman, William G; Evans, D Gareth
2016-11-01
The sensitivity of testing BRCA1 and BRCA2 remains unresolved as the frequency of deep intronic splicing variants has not been defined in high-risk familial breast/ovarian cancer families. This variant category is reported at significant frequency in other tumour predisposition genes, including NF1 and MSH2. We carried out comprehensive whole gene RNA analysis on 45 high-risk breast/ovary and male breast cancer families with no identified pathogenic variant on exonic sequencing and copy number analysis of BRCA1/2. In addition, we undertook variant screening of a 10-gene high/moderate risk breast/ovarian cancer panel by next-generation sequencing. DNA testing identified the causative variant in 50/56 (89%) breast/ovarian/male breast cancer families with Manchester scores of ≥50 with two variants being confirmed to affect splicing on RNA analysis. RNA sequencing of BRCA1/BRCA2 on 45 individuals from high-risk families identified no deep intronic variants and did not suggest loss of RNA expression as a cause of lost sensitivity. Panel testing in 42 samples identified a known RAD51D variant, a high-risk ATM variant in another breast ovary family and a truncating CHEK2 mutation. Current exonic sequencing and copy number analysis variant detection methods of BRCA1/2 have high sensitivity in high-risk breast/ovarian cancer families. Sequence analysis of RNA does not identify any variants undetected by current analysis of BRCA1/2. However, RNA analysis clarified the pathogenicity of variants of unknown significance detected by current methods. The low diagnostic uplift achieved through sequence analysis of the other known breast/ovarian cancer susceptibility genes indicates that further high-risk genes remain to be identified.
Liu, Ming; Zhao, Jing; Lu, XiaoZuo; Li, Gang; Wu, Taixia; Zhang, LiFu
2018-05-10
With spectral methods, noninvasive in vivo determination of blood hyperviscosity is potentially very useful in clinical diagnosis. In this study, 67 male subjects participated (41 healthy and 26 with hyperviscosity according to blood sample analysis). Reflectance spectra of the subjects' tongue tips were measured, and a classification method based on principal component analysis combined with an artificial neural network model was built to identify hyperviscosity. Hold-out and leave-one-out methods, which are widely accepted in model validation, were used to avoid significant bias and lessen overfitting. To measure the performance of the classification, sensitivity, specificity, accuracy and F-measure were calculated, respectively. The accuracies with 100 repetitions of the hold-out method and 67 repetitions of the leave-one-out method were 88.05% and 97.01%, respectively. Experimental results indicate that the classification model has practical value and demonstrate the feasibility of using spectroscopy to identify hyperviscosity noninvasively.
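A runnable sketch of the same pipeline shape (PCA features feeding a small neural network, validated with leave-one-out), built with scikit-learn on synthetic spectra; the architecture and component count are assumptions, not the paper's model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic tongue-tip reflectance spectra (subjects x wavelengths);
# labels: 0 = healthy (41), 1 = hyperviscosity (26).
rng = np.random.default_rng(7)
spectra = np.vstack([rng.normal(0.0, 1.0, (41, 300)),
                     rng.normal(0.4, 1.0, (26, 300))])
labels = np.r_[np.zeros(41, int), np.ones(26, int)]

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
acc = cross_val_score(clf, spectra, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2%}")
```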
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
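As an illustration of variance-based sensitivity analysis, the sketch below uses the SALib package (assumed installed) on a toy stand-in for an EDL-style model; the input names, bounds, and response function are invented.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical model: landing-error magnitude from three uncertain inputs
# (entry flight-path angle, atmospheric density scale, ballistic coefficient).
problem = {"num_vars": 3,
           "names": ["gamma", "rho_scale", "beta"],
           "bounds": [[-0.5, 0.5], [0.8, 1.2], [90.0, 110.0]]}

X = saltelli.sample(problem, 1024)                 # N*(2D+2) model evaluations
Y = 3.0 * X[:, 0] ** 2 + 2.0 * (X[:, 1] - 1.0) + 0.01 * (X[:, 2] - 100.0)

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```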
Comparison of Gap Elements and Contact Algorithm for 3D Contact Analysis of Spiral Bevel Gears
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Tiku, K.; Kumar, A.; Handschuh, R.
1994-01-01
Three dimensional stress analysis of spiral bevel gears in mesh using the finite element method is presented. A finite element model is generated by solving equations that identify tooth surface coordinates. Contact is simulated by the automatic generation of nonpenetration constraints. This method is compared to a finite element contact analysis conducted with gap elements.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
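The orthogonal-based idea can be sketched with a QR decomposition with column pivoting on the parameter sensitivity matrix: columns selected early with large R-diagonal entries are independently informative, while near-dependent columns mark practically unidentifiable parameters. The matrix and threshold below are synthetic placeholders, not the sucrose-model sensitivities.

```python
import numpy as np
from scipy.linalg import qr

# S: sensitivity matrix, S[i, j] = d(output_i)/d(theta_j) at nominal parameters.
# Toy 50x12 matrix whose last two columns are nearly linear combinations of
# the others, i.e. practically unidentifiable.
rng = np.random.default_rng(8)
S = rng.normal(size=(50, 10))
S = np.column_stack([S,
                     S[:, 0] + 1e-6 * rng.normal(size=50),
                     S[:, 1] - S[:, 2] + 1e-6 * rng.normal(size=50)])

Q, R, piv = qr(S, pivoting=True)
score = np.abs(np.diag(R))                 # large -> independently informative
identifiable = piv[score > 1e-3 * score[0]]
print("identifiable parameter indices:", sorted(identifiable.tolist()))
```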
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After the convergence was achieved the results of each method were compared. In particular, a discussion on peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn diagram based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method is proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
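Of the three methods, SRC is the simplest to reproduce; a minimal sketch on a toy linear model follows. The model and sample size are invented, and the sum of squared SRCs near 1 checks the linearity assumption on which the method relies.

```python
import numpy as np

def src(X, y):
    """Standardized regression coefficients: fit a linear model on z-scored
    inputs and output; SRC_j**2 approximates the first-order sensitivity of
    factor j when the model is close to linear (sum of SRC**2 near 1)."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(9)
X = rng.random((2000, 3))
y = 5 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2] + 0.1 * rng.normal(size=2000)
coeffs = src(X, y)
print("SRCs:", np.round(coeffs, 2), "| sum of squares:", round(np.sum(coeffs**2), 2))
```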
Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro
2016-01-01
The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and PubMed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used.
FECAL SOURCE TRACKING BY ANTIBIOTIC RESISTANCE ANALYSIS ON A WATERSHED EXHIBITING LOW RESISTANCE
The ongoing development of microbial source tracking has made it possible to identify contamination sources with varying accuracy, depending on the method used. The purpose of this study was to test the efficiency of the antibiotic resistance analysis (ARA) method under low ...
Estimating optical imaging system performance for space applications
NASA Technical Reports Server (NTRS)
Sinclair, K. F.
1972-01-01
The critical system elements of an optical imaging system are identified and a method for an initial assessment of system performance is presented. A generalized imaging system is defined. A system analysis is considered, followed by a component analysis. An example of the method is given using a film imaging system.
Overview of mycotoxin methods, present status and future needs.
Gilbert, J
1999-01-01
This article reviews current requirements for the analysis for mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.
1976-09-01
The purpose of this research effort was to determine the financial management educational needs of USAF graduate logistics positions. Goal analysis...was used to identify financial management techniques and task analysis was used to develop a method to identify the use of financial management techniques...positions. The survey identified financial management techniques in five areas: cost accounting, capital budgeting, working capital, financial forecasting, and programming. (Author)
Estimating Water Levels with Google Earth Engine
NASA Astrophysics Data System (ADS)
Lucero, E.; Russo, T. A.; Zentner, M.; May, J.; Nguy-Robertson, A. L.
2016-12-01
Reservoirs serve multiple functions and are vital for storage, electricity generation, and flood control. For many areas, traditional ground-based reservoir measurements may not be available or data dissemination may be problematic. Consistent monitoring of reservoir levels in data-poor areas can be achieved through remote sensing, providing information to researchers and the international community. Estimates of trends and relative reservoir volume can be used to identify water supply vulnerability, anticipate low power generation, and predict flood risk. Image processing with automated cloud computing provides opportunities to study multiple geographic areas in near real-time. We demonstrate the prediction capability of a cloud environment for identifying water trends at reservoirs in the US, and then apply the method to data-poor areas in North Korea, Iran, Azerbaijan, Zambia, and India. The Google Earth Engine cloud platform hosts remote sensing data and can be used to automate reservoir level estimation with multispectral imagery. We combine automated cloud-based analysis from Landsat image classification to identify reservoir surface area trends and radar altimetry to identify reservoir level trends. The study estimates water level trends using three years of data from four domestic reservoirs to validate the remote sensing method, and five foreign reservoirs to demonstrate the method application. We report correlations between ground-based reservoir level measurements in the US and our remote sensing methods, and correlations between the cloud analysis and altimetry data for reservoirs in data-poor areas. The availability of regular satellite imagery and an automated, near real-time application method provides the necessary datasets for further temporal analysis, reservoir modeling, and flood forecasting. All statements of fact, analysis, or opinion are those of the author and do not reflect the official policy or position of the Department of Defense or any of its components or the U.S. Government.
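A sketch of the surface-area half of this workflow in the Earth Engine Python API; the reservoir location, collection ID, NDWI band pair and threshold, and cloud filter are assumptions to adapt, and running it requires configured Earth Engine credentials.

```python
import ee
ee.Initialize()  # assumes Earth Engine authentication is already set up

# Hypothetical reservoir location; swap in real coordinates.
reservoir = ee.Geometry.Point([-121.0, 39.5]).buffer(5000)

def water_area_m2(img):
    ndwi = img.normalizedDifference(["B3", "B5"])   # green vs NIR (Landsat 8 TOA)
    water = ndwi.gt(0.0)
    area = water.multiply(ee.Image.pixelArea()).reduceRegion(
        reducer=ee.Reducer.sum(), geometry=reservoir, scale=30, maxPixels=1e9)
    return img.set({"water_m2": area.get("nd"),
                    "date": img.date().format("YYYY-MM-dd")})

col = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
       .filterBounds(reservoir)
       .filterDate("2013-01-01", "2016-01-01")
       .filter(ee.Filter.lt("CLOUD_COVER", 20))
       .map(water_area_m2))

# A time series of surface area; trends in area proxy trends in storage volume.
print(col.aggregate_array("date").getInfo())
print(col.aggregate_array("water_m2").getInfo())
```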
Gurau, Oana; Bosl, William J.; Newton, Charles R.
2017-01-01
Autism spectrum disorders (ASD) are thought to be associated with abnormal neural connectivity. Presently, neural connectivity is a theoretical construct that cannot be easily measured. Research in network science and time series analysis suggests that neural network structure, a marker of neural activity, can be measured with electroencephalography (EEG). EEG can be quantified by different methods of analysis to potentially detect brain abnormalities. The aim of this review is to examine evidence for the utility of three methods of EEG signal analysis in ASD diagnosis and subtype delineation. We conducted a review of the literature in which 40 studies were identified and classified according to the principal method of EEG analysis into three categories: functional connectivity analysis, spectral power analysis, and information dynamics. All studies identified significant differences between ASD patients and non-ASD subjects. However, due to high heterogeneity in the results, generalizations could not be inferred, and none of the methods alone is currently useful as a new diagnostic tool. The lack of studies prevented the analysis of these methods as tools for ASD subtype delineation. These results confirm EEG abnormalities in ASD, but they are not yet sufficient to aid diagnosis. Future research with larger samples and more robust study designs could allow for higher sensitivity and consistency in characterizing ASD, paving the way for developing new means of diagnosis. PMID:28747892
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
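As a usage illustration, the sketch below runs adaptive CNA through the open-source OVITO Python module (an assumed dependency, not something the article prescribes); the input file name is a placeholder.

```python
# A minimal sketch assuming the ovito package is installed and a LAMMPS
# dump file is available at the given (hypothetical) path.
from ovito.io import import_file
from ovito.modifiers import CommonNeighborAnalysisModifier

pipeline = import_file("dump.lammpstrj")            # hypothetical trajectory file
pipeline.modifiers.append(CommonNeighborAnalysisModifier())
data = pipeline.compute()

# Per-frame counts of atoms classified as FCC/HCP/BCC/other by CNA.
for key in ("FCC", "HCP", "BCC", "OTHER"):
    print(key, data.attributes[f"CommonNeighborAnalysis.counts.{key}"])
```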
Bonan, Brigitte; Martelli, Nicolas; Berhoune, Malik; Maestroni, Marie-Laure; Havard, Laurent; Prognon, Patrice
2009-02-01
To apply the Hazard Analysis and Critical Control Points (HACCP) method to the preparation of anti-cancer drugs, to identify critical control points in our cancer chemotherapy process, and to propose control measures and corrective actions to manage these processes. The HACCP application began in January 2004 in our centralized chemotherapy compounding unit. From October 2004 to August 2005, monitoring of process nonconformities was performed to assess the method. According to the HACCP method, a multidisciplinary team was formed to describe and assess the cancer chemotherapy process. This team listed all of the critical points and calculated their risk indexes according to their frequency of occurrence, their severity and their detectability. The team defined monitoring, control measures and corrective actions for each identified risk. Finally, over a 10-month period, pharmacists reported each nonconformity of the process in a follow-up document. Our team described 11 steps in the cancer chemotherapy process. The team identified 39 critical control points, including 11 of higher importance with a high risk index. Over 10 months, 16,647 preparations were performed; 1225 nonconformities were reported during this same period. The HACCP method is relevant when it is used to target a specific process such as the preparation of anti-cancer drugs. This method helped us to focus on the production steps that can have a critical influence on product quality, and led us to improve our process.
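The risk index described here is conventionally the product of occurrence, severity, and detectability ratings; a toy calculation follows, with process steps and ratings invented for illustration rather than taken from the study.

```python
# Illustrative HACCP-style criticality scoring: index = occurrence x severity
# x detectability, each rated 1-5. Steps and ratings are placeholders.
steps = {"prescription transcription": (3, 5, 2),
         "dose calculation":           (2, 5, 3),
         "aseptic compounding":        (2, 4, 4)}
for step, (occ, sev, det) in sorted(steps.items(),
                                    key=lambda kv: -kv[1][0] * kv[1][1] * kv[1][2]):
    print(f"{step}: risk index = {occ * sev * det}")
```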
Sherman, Recinda L; Henry, Kevin A; Tannenbaum, Stacey L; Feaster, Daniel J; Kobetz, Erin; Lee, David J
2014-03-20
Epidemiologists are gradually incorporating spatial analysis into health-related research as geocoded cases of disease become widely available and health-focused geospatial computer applications are developed. One health-focused application of spatial analysis is cluster detection. Using cluster detection to identify geographic areas with high-risk populations and then screening those populations for disease can improve cancer control. SaTScan is a free cluster-detection software application used by epidemiologists around the world to describe spatial clusters of infectious and chronic disease, as well as disease vectors and risk factors. The objectives of this article are to describe how spatial analysis can be used in cancer control to detect geographic areas in need of colorectal cancer screening intervention, identify issues commonly encountered by SaTScan users, detail how to select the appropriate methods for using SaTScan, and explain how method selection can affect results. As an example, we used various methods to detect areas in Florida where the population is at high risk for late-stage diagnosis of colorectal cancer. We found that much of our analysis was underpowered and that no single method detected all clusters of statistical or public health significance. However, all methods detected 1 area as high risk; this area is potentially a priority area for a screening intervention. Cluster detection can be incorporated into routine public health operations, but the challenge is to identify areas in which the burden of disease can be alleviated through public health intervention. Reliance on SaTScan's default settings does not always produce pertinent results.
Rajani, Vishaal; Carrero, Gustavo; Golan, David E.; de Vries, Gerda; Cairo, Christopher W.
2011-01-01
The diffusion of receptors within the two-dimensional environment of the plasma membrane is a complex process. Although certain components diffuse according to a random walk model (Brownian diffusion), an overwhelming body of work has found that membrane diffusion is nonideal (anomalous diffusion). One of the most powerful methods for studying membrane diffusion is single particle tracking (SPT), which records the trajectory of a label attached to a membrane component of interest. One of the outstanding problems in SPT is the analysis of data to identify the presence of heterogeneity. We have adapted a first-passage time (FPT) algorithm, originally developed for the interpretation of animal movement, for the analysis of SPT data. We discuss the general application of the FPT analysis to molecular diffusion, and use simulations to test the method against data containing known regions of confinement. We conclude that FPT can be used to identify the presence and size of confinement within trajectories of the receptor LFA-1, and these results are consistent with previous reports on the size of LFA-1 clusters. The analysis of trajectory data for cell surface receptors by FPT provides a robust method to determine the presence and size of confined regions of diffusion. PMID:21402028
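A minimal sketch of an FPT calculation on a synthetic trajectory: for each point, count the steps until the particle first leaves a circle of radius r. The radius, trajectory, and confinement segment are invented, and the published method additionally scans r and examines the variance of FPT to size confinement zones.

```python
import numpy as np

def first_passage_times(xy, r):
    """For each trajectory point, the number of steps until the particle first
    moves farther than r from that point (NaN if it never does)."""
    n = len(xy)
    fpt = np.full(n, np.nan)
    for i in range(n):
        d = np.linalg.norm(xy[i + 1:] - xy[i], axis=1)
        out = np.nonzero(d > r)[0]
        if out.size:
            fpt[i] = out[0] + 1
    return fpt

# Free diffusion vs a confined stretch: FPT rises sharply inside confinement.
rng = np.random.default_rng(10)
steps = rng.normal(0, 1, (400, 2))
steps[150:250] *= 0.15                       # confined segment (smaller steps)
xy = np.cumsum(steps, axis=0)
fpt = first_passage_times(xy, r=3.0)
print("mean FPT free: %.1f  confined: %.1f" %
      (np.nanmean(fpt[:150]), np.nanmean(fpt[150:250])))
```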
Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou
2017-01-01
Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.
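A minimal sketch of the gate arithmetic with trapezoidal fuzzy probabilities, using the common elementwise approximation; the tree fragment, event values, and defuzzification rule are invented placeholders, not the paper's elicited data or its VB implementation.

```python
import numpy as np

# Trapezoidal fuzzy probabilities (a, b, c, d); elementwise gate arithmetic is a
# standard approximation for fuzzy fault trees with independent basic events.
def f_and(*events):                     # AND gate: product of input probabilities
    return np.prod(np.vstack(events), axis=0)

def f_or(*events):                      # OR gate: 1 - prod(1 - p)
    return 1.0 - np.prod(1.0 - np.vstack(events), axis=0)

ignition  = np.array([0.01, 0.02, 0.03, 0.05])   # placeholder expert elicitations
dust_conc = np.array([0.05, 0.08, 0.10, 0.15])
dispersal = np.array([0.10, 0.15, 0.20, 0.25])

explosion = f_and(ignition, f_or(dust_conc, dispersal))
print("fuzzy CDE probability (a,b,c,d):", np.round(explosion, 4))
# Defuzzify with a simple average of the four defining points:
print("crisp estimate:", round(explosion.mean(), 5))
```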
Tissue Non-Specific Genes and Pathways Associated with Diabetes: An Expression Meta-Analysis.
Mei, Hao; Li, Lianna; Liu, Shijian; Jiang, Fan; Griswold, Michael; Mosley, Thomas
2017-01-21
We performed expression studies to identify tissue non-specific genes and pathways of diabetes by meta-analysis. We searched curated datasets of the Gene Expression Omnibus (GEO) database and identified 13 and five expression studies of diabetes and insulin responses at various tissues, respectively. We tested differential gene expression by an empirical Bayes-based linear method and investigated gene set expression association by knowledge-based enrichment analysis. Meta-analysis by different methods was applied to identify tissue non-specific genes and gene sets. We also proposed pathway mapping analysis to infer functions of the identified gene sets, and correlation and independence analysis to evaluate the expression association profile of genes and gene sets between studies and tissues. Our analysis showed that the PGRMC1 and HADH genes were significant across diabetes studies, while the IRS1 and MPST genes were significant across insulin response studies; joint analysis showed that the HADH and MPST genes were significant over all combined datasets. The pathway analysis identified six significant gene sets over all studies. KEGG pathway mapping indicated that the significant gene sets are related to diabetes pathogenesis. The results also showed that 12.8% and 59.0% of pairwise studies had significantly correlated expression association for genes and gene sets, respectively; moreover, 12.8% of pairwise studies had independent expression association for genes, but no pairwise studies differed significantly in expression association of gene sets. Our analysis indicated that there are both tissue specific and non-specific genes and pathways associated with diabetes pathogenesis. Compared to gene expression, pathway association tends to be tissue non-specific, and a common pathway influencing diabetes development is activated through different genes at different tissues.
Zhang, Jiang; Liu, Qi; Chen, Huafu; Yuan, Zhen; Huang, Jin; Deng, Lihua; Lu, Fengmei; Zhang, Junpeng; Wang, Yuqing; Wang, Mingwen; Chen, Liangyin
2015-01-01
Clustering analysis methods have been widely applied to identify the functional brain networks of multitask paradigms. However, previously used clustering techniques are computationally expensive and thus impractical for clinical applications. In this study a novel method, called SOM-SAPC, which combines self-organizing mapping (SOM) and supervised affinity propagation clustering (SAPC), is proposed and implemented to identify the motor execution (ME) and motor imagery (MI) networks. In SOM-SAPC, SOM is first performed to process the fMRI data and SAPC is then utilized to cluster the patterns of functional networks. As a result, SOM-SAPC is able to significantly reduce the computational cost of brain network analysis. Simulation and clinical tests involving ME and MI were conducted based on SOM-SAPC, and the analysis results indicated that functional brain networks were clearly identified with different response patterns and reduced computational cost. In particular, three activation clusters were clearly revealed, which include parts of the visual, ME and MI functional networks. These findings validate that SOM-SAPC is an effective and robust method for analyzing fMRI data with multiple tasks.
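The two-stage idea can be sketched with the third-party minisom package followed by scikit-learn's affinity propagation (unsupervised here, unlike the paper's supervised SAPC); the data, grid size, and training length are invented.

```python
import numpy as np
from minisom import MiniSom                      # assumed third-party package
from sklearn.cluster import AffinityPropagation

# Voxel time courses (voxels x timepoints); synthetic stand-ins for three
# response patterns (e.g. visual-, ME- and MI-related).
rng = np.random.default_rng(11)
t = np.arange(100)
patterns = np.vstack([np.sin(2 * np.pi * t / p) for p in (10, 20, 40)])
X = np.repeat(patterns, 50, axis=0) + 0.3 * rng.normal(size=(150, 100))

# Stage 1: SOM condenses many voxels into a small grid of prototypes.
som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 2000)
prototypes = som.get_weights().reshape(-1, X.shape[1])

# Stage 2: affinity propagation clusters the prototypes, which is far cheaper
# than clustering every voxel directly.
ap = AffinityPropagation(random_state=0).fit(prototypes)
print("clusters found:", len(ap.cluster_centers_indices_))
```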
Method to Identify Deep Cases Based on Relationships between Nouns, Verbs, and Particles
ERIC Educational Resources Information Center
Ide, Daisuke; Kimura, Masaomi
2016-01-01
Deep cases representing the significant meaning of nouns in sentences play a crucial role in semantic analysis. However, a case tends to be manually identified because it requires understanding the meaning and relationships of words. To address this problem, we propose a method to predict deep cases by analyzing the relationship between nouns,…
Mumma, Matthew A; Soulliere, Colleen E; Mahoney, Shane P; Waits, Lisette P
2014-01-01
Predator species identification is an important step in understanding predator-prey interactions, but predator identifications using kill site observations are often unreliable. We used molecular tools to analyse predator saliva, scat and hair from caribou calf kills in Newfoundland, Canada to identify the predator species, individual and sex. We sampled DNA from 32 carcasses using cotton swabs to collect predator saliva. We used fragment length analysis and sequencing of mitochondrial DNA to distinguish between coyote, black bear, Canada lynx and red fox and used nuclear DNA microsatellite analysis to identify individuals. We compared predator species detected using molecular tools to those assigned via field observations at each kill. We identified a predator species at 94% of carcasses using molecular methods, while observational methods assigned a predator species to 62.5% of kills. Molecular methods attributed 66.7% of kills to coyote and 33.3% to black bear, while observations assigned 40%, 45%, 10% and 5% to coyote, bear, lynx and fox, respectively. Individual identification was successful at 70% of kills where a predator species was identified. Only one individual was identified at each kill, but some individuals were found at multiple kills. Predator sex was predominantly male. We demonstrate the first large-scale evaluation of predator species, individual and sex identification using molecular techniques to extract DNA from swabs of wild prey carcasses. Our results indicate that kill site swabs (i) can be highly successful in identifying the predator species and individual responsible; and (ii) serve to inform and complement traditional methods. © 2013 John Wiley & Sons Ltd.
Methodological flaws introduce strong bias into molecular analysis of microbial populations.
Krakat, N; Anjum, R; Demirel, B; Schröder, P
2017-02-01
In this study, we report how different cell disruption methods, PCR primers and in silico analyses can seriously bias results from microbial population studies, with consequences for the credibility and reproducibility of the findings. Our results emphasize the pitfalls of commonly used experimental methods that can seriously weaken the interpretation of results. Four different cell lysis methods, three commonly used primer pairs and various computer-based analyses were applied to investigate the microbial diversity of a fermentation sample composed of chicken dung. The fault-prone, but still frequently used, amplified rRNA gene restriction analysis was chosen to identify common weaknesses. In contrast to other studies, we focused on the complete analytical process, from cell disruption to in silico analysis, and identified potential error rates. This identified a wide disagreement of results between applied experimental approaches leading to very different community structures depending on the chosen approach. The interpretation of microbial diversity data remains a challenge. In order to accurately investigate the taxonomic diversity and structure of prokaryotic communities, we suggest a multi-level approach combining DNA-based and DNA-independent techniques. The identified weaknesses of commonly used methods to study microbial diversity can be overcome by a multi-level approach, which produces more reliable data about the fate and behaviour of microbial communities of engineered habitats such as biogas plants, so that the best performance can be ensured. © 2016 The Society for Applied Microbiology.
ERIC Educational Resources Information Center
Calatrava Moreno, María del Carmen; Danowitz, Mary Ann
2016-01-01
The aim of this study was to identify how and why doctoral students do interdisciplinary research. A mixed-methods approach utilising bibliometric analysis of the publications of 195 students identified those who had published interdisciplinary research. This objective measurement of the interdisciplinarity, applying the Rao-Stirling index to Web…
The Use of Gap Analysis to Increase Student Completion Rates at Travelor Adult School
ERIC Educational Resources Information Center
Gil, Blanca Estela
2013-01-01
This project applied the gap analysis problem-solving framework (Clark & Estes, 2008) in order to help develop strategies to increase completion rates at Travelor Adult School. The purpose of the study was to identify whether the knowledge, motivation and organization barriers were contributing to the identified gap. A mixed method approached…
Saldanha, Ian J; Li, Tianjing; Yang, Cui; Ugarte-Gil, Cesar; Rutherford, George W; Dickersin, Kay
2016-02-01
Methods to develop core outcome sets, the minimum outcomes that should be measured in research in a topic area, vary. We applied social network analysis methods to understand outcome co-occurrence patterns in human immunodeficiency virus (HIV)/AIDS systematic reviews and identify outcomes central to the network of outcomes in HIV/AIDS. We examined all Cochrane reviews of HIV/AIDS as of June 2013. We defined a tie as two outcomes (nodes) co-occurring in ≥2 reviews. To identify central outcomes, we used normalized node betweenness centrality (nNBC) (the extent to which connections between other outcomes in a network rely on that outcome as an intermediary). We conducted a subgroup analysis by HIV/AIDS intervention type (i.e., clinical management, biomedical prevention, behavioral prevention, and health services). The 140 included reviews examined 1,140 outcomes, 294 of which were unique. The most central outcome overall was all-cause mortality (nNBC = 23.9). The most central and most frequent outcomes differed overall and within subgroups. For example, "adverse events (specified)" was among the most central but not among the most frequent outcomes, overall. Social network analysis methods are a novel application to identify central outcomes, which provides additional information potentially useful for developing core outcome sets. Copyright © 2016 Elsevier Inc. All rights reserved.
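For illustration, computing nNBC takes a few lines with networkx; the toy edges below are invented HIV/AIDS outcome pairs, not the review's actual co-occurrence data.

```python
import networkx as nx

# Outcome co-occurrence network: nodes are outcomes, an edge means the pair
# co-occurred in >= 2 reviews (toy edges for illustration).
G = nx.Graph()
G.add_edges_from([("all-cause mortality", "CD4 count"),
                  ("all-cause mortality", "viral suppression"),
                  ("all-cause mortality", "adverse events (specified)"),
                  ("CD4 count", "viral suppression"),
                  ("adverse events (specified)", "treatment adherence"),
                  ("treatment adherence", "quality of life")])

nNBC = nx.betweenness_centrality(G, normalized=True)
for outcome, score in sorted(nNBC.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{outcome}: nNBC = {score:.2f}")
```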
Chiral Analysis of Isopulegol by Fourier Transform Molecular Rotational Spectroscopy
NASA Astrophysics Data System (ADS)
Evangelisti, Luca; Seifert, Nathan A.; Spada, Lorenzo; Pate, Brooks
2016-06-01
Chiral analysis of molecules with multiple chiral centers can be performed using pulsed-jet Fourier transform rotational spectroscopy. This analysis includes quantitative measurement of diastereomer products and, with the three-wave mixing methods developed by Patterson, Schnell, and Doyle (Nature 497, 475-477 (2013)), quantitative determination of the enantiomeric excess of each diastereomer. The high resolution of the technique enables the analysis to be performed directly on complex samples without the need for chromatographic separation. Isopulegol has been chosen to show the capabilities of Fourier transform rotational spectroscopy for chiral analysis. Broadband rotational spectroscopy produces spectra with signal-to-noise ratios exceeding 1000:1. The ability to identify low-abundance (0.1-1%) diastereomers in the sample will be described. Methods to rapidly identify rotational spectra from isotopologues at natural abundance will be shown, and the molecular structures obtained from this analysis will be compared to theory. The role that quantum chemistry calculations play in identifying structural minima and estimating their spectroscopic properties to aid spectral analysis will be described. Finally, the implementation of three-wave mixing techniques to measure the enantiomeric excess of each diastereomer and determine the absolute configuration of the enantiomer in excess will be described.
Systematic analysis of molecular mechanisms for HCC metastasis via text mining approach.
Zhen, Cheng; Zhu, Caizhong; Chen, Haoyang; Xiong, Yiru; Tan, Junyuan; Chen, Dong; Li, Jin
2017-02-21
To systematically explore the molecular mechanisms of hepatocellular carcinoma (HCC) metastasis and identify regulatory genes with text mining methods. Genes with the highest frequencies and significant pathways related to HCC metastasis were listed. A handful of proteins, such as EGFR, MDM2, TP53 and APP, were identified as hub nodes in the PPI (protein-protein interaction) network. Compared with genes unique to HBV-HCCs, genes particular to HCV-HCCs were fewer, but may participate in more extensive signaling processes. VEGFA, PI3KCA, MAPK1, MMP9 and other genes may play important roles in multiple phenotypes of metastasis. Genes in the abstracts of HCC-metastasis literature were identified. Word frequency analysis, KEGG pathway and PPI network analyses were performed. Then co-occurrence analyses between genes and metastasis-related phenotypes were carried out. Text mining is effective for revealing potential regulators or pathways, but its purpose should be specific, and the combination of various methods will be more useful.
Failure-Modes-And-Effects Analysis Of Software Logic
NASA Technical Reports Server (NTRS)
Garcia, Danny; Hartline, Thomas; Minor, Terry; Statum, David; Vice, David
1996-01-01
Rigorous analysis applied early in design effort. Method of identifying potential inadequacies and modes and effects of failures caused by inadequacies (failure-modes-and-effects analysis or "FMEA" for short) devised for application to software logic.
ERIC Educational Resources Information Center
Kapanadze, Dilek Ünveren
2018-01-01
The aim of this study is to identify the effect of using discourse analysis method on the skills of reading comprehension, textual analysis, creating discourse and use of language. In this study, the authentic test model with pre-test and post-test control group was used in order to determine the difference of academic achievement between…
Wei, Shi-Tong; Sun, Yong-Hua; Zong, Shi-Hua
2017-09-01
The aim of the current study was to identify hub pathways of rheumatoid arthritis (RA) using a novel method based on differential pathway network (DPN) analysis. The study proposed a DPN in which a protein-protein interaction (PPI) network was integrated with pathway-pathway interactions. Pathway data were obtained from the background PPI network and the Reactome pathway database. Subsequently, pathway interactions were extracted from the pathway data by building randomized gene-gene interactions, and a weight value was assigned to each pathway interaction using the Spearman correlation coefficient (SCC) to identify differential pathway interactions. Differential pathway interactions were visualized using Cytoscape to construct a DPN. Topological analysis was conducted to identify hub pathways, defined as those within the top 5% of the degree distribution of the DPN. Modules of the DPN were mined using ClusterONE. A total of 855 pathways were selected to build pathway interactions. By filtering pathway interactions with weight values >0.7, a DPN with 312 nodes and 791 edges was obtained. Topological degree analysis revealed 15 hub pathways for RA, such as heparan sulfate/heparin-glycosaminoglycan (HS-GAG) degradation, HS-GAG metabolism and keratan sulfate degradation. Furthermore, hub pathways were also important in modules, which validated their significance. In conclusion, the proposed method is a computationally efficient way to identify hub pathways of RA; it identified 15 hub pathways that may be potential biomarkers and provide insight for future investigation and treatment of RA.
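A compact sketch of the edge-weighting and hub-selection steps with scipy and networkx; the 0.7 cutoff on |SCC| and the top-5%-degree rule mirror the description above, but the pathway scores are invented data.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

# expr: pathway-level expression scores (pathways x samples); toy stand-in
# with one strongly coupled pathway pair.
rng = np.random.default_rng(12)
expr = rng.normal(size=(40, 30))
expr[1] = 0.9 * expr[0] + 0.1 * rng.normal(size=30)

# Weight each pathway pair by |Spearman correlation|; keep edges with weight > 0.7.
G = nx.Graph()
n = expr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        rho, _ = spearmanr(expr[i], expr[j])
        if abs(rho) > 0.7:
            G.add_edge(i, j, weight=abs(rho))

# Hub pathways: nodes in the top 5% of the degree distribution.
if G.number_of_nodes():
    degrees = dict(G.degree())
    cut = np.percentile(list(degrees.values()), 95)
    print("hub pathways:", [p for p, d in degrees.items() if d >= cut])
```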
Rapid identification of oral Actinomyces species cultivated from subgingival biofilm by MALDI-TOF-MS
Stingu, Catalina S.; Borgmann, Toralf; Rodloff, Arne C.; Vielkind, Paul; Jentsch, Holger; Schellenberger, Wolfgang; Eschrich, Klaus
2015-01-01
Background Actinomyces are a common part of the resident flora of the human intestinal tract, genitourinary system and skin. Isolation and identification of Actinomyces by conventional methods is often difficult and time consuming. In recent years, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has become a rapid and simple method to identify bacteria. Objective The present study evaluated a new in-house algorithm using MALDI-TOF-MS for rapid identification of different species of oral Actinomyces cultivated from subgingival biofilm. Design Eleven reference strains and 674 clinical strains were used in this study. All strains were preliminarily identified using biochemical methods and then subjected to MALDI-TOF-MS analysis using both similarity-based analysis and classification methods (support vector machine [SVM]). The genotype of the reference strains and of 232 clinical strains was identified by sequence analysis of the 16S ribosomal RNA (rRNA) gene. Results The sequence analysis of the 16S rRNA gene of all reference strains confirmed their previous identification. The MALDI-TOF-MS spectra obtained from the reference strains and from the clinical strains unambiguously identified as Actinomyces by 16S rRNA sequencing were used to create the mass spectra reference database. Visual inspection of the mass spectra of different species reveals both similarities and differences; however, the differences are not large enough to allow reliable differentiation by similarity analysis. Therefore, classification methods were applied as an alternative approach for differentiation and identification of Actinomyces at the species level. A cross-validation of the reference database, representing 14 Actinomyces species, yielded correct results for all species represented by more than two strains in the database. Conclusions Our results suggest that the combination of MALDI-TOF-MS with powerful classification algorithms, such as SVMs, provides a useful tool for the differentiation and identification of oral Actinomyces. PMID:25597306
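The classification step lends itself to a compact illustration. Below is a hedged sketch of species-level SVM classification of binned spectra with cross-validation; the bin count, strain numbers, and gamma-distributed toy intensities are invented, and only the 14-species SVM-plus-cross-validation design comes from the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_strains, n_bins = 280, 600                 # spectra binned along the m/z axis (toy)
X = rng.gamma(2.0, 1.0, size=(n_strains, n_bins))
y = rng.integers(0, 14, size=n_strains)      # 14 Actinomyces species (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)    # cross-validation of the reference database
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

On real spectra the features would be peak intensities aligned to a common m/z grid, and accuracy would be reported per species rather than pooled.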
Developing and Evaluating the HRM Technique for Identifying Cytochrome P450 2D6 Polymorphisms.
Lu, Hsiu-Chin; Chang, Ya-Sian; Chang, Chun-Chi; Lin, Ching-Hsiung; Chang, Jan-Gowth
2015-05-01
Cytochrome P450 2D6 is one of the important enzymes involved in the metabolism of many widely used drugs. Genetic polymorphisms of CYP2D6 can affect its activity; therefore, an efficient method for identifying CYP2D6 polymorphisms is clinically important. We developed a high-resolution melting (HRM) analysis to investigate CYP2D6 polymorphisms. Genomic DNA was extracted from peripheral blood samples from 71 healthy individuals. All nine exons of the CYP2D6 gene were sequenced before screening by HRM analysis. This method can detect the most common CYP2D6 genotypes (*1, *2, *4, *10, *14, *21, *39, and *41) in the Chinese population. All samples were successfully genotyped, and the four most common mutant CYP2D6 alleles (*1, *2, *10, and *41) can be genotyped. The single nucleotide polymorphism (SNP) frequencies of 100C > T (rs1065852), 1039C > T (rs1081003), 1661G > C (rs1058164), 2663G > A (rs28371722), 2850C > T (rs16947), 2988G > A (rs28371725), 3181A > G, and 4180G > C (rs1135840) were 58%, 61%, 73%, 1%, 13%, 3%, 1%, and 73%, respectively. We identified 100% of heterozygotes without any errors. The two homozygous genotypes (1661G > C and 4180G > C) can be distinguished by mixing with a sample of known genotype to generate an artificial heterozygote for HRM analysis. Therefore, all samples could be identified using our HRM method, and the results of HRM analysis are identical to those obtained by sequencing. Our method achieved 100% sensitivity, specificity, positive predictive value and negative predictive value. HRM analysis is a non-gel resolution method that is faster and less expensive than direct sequencing. Our study shows that it is an efficient tool for typing CYP2D6 polymorphisms. © 2014 Wiley Periodicals, Inc.
Er, Tze-Kiong; Kan, Tzu-Min; Su, Yu-Fa; Liu, Ta-Chih; Chang, Jan-Gowth; Hung, Shih-Ya; Jong, Yuh-Jyh
2012-11-12
Spinal muscular atrophy (SMA) is a neurodegenerative disease and the leading genetic cause of infant mortality. More than 95% of patients with SMA have a homozygous disruption of the survival motor neuron 1 (SMN1) gene caused by mutation, deletion, or rearrangement. Recent evidence suggests that treatment in the immediate postnatal period, prior to the development of weakness or very early in the course of the disease, may be effective; our objective was therefore to establish a feasible method for SMA screening. High-resolution melting (HRM) analysis is rapidly becoming the most important mutation-scanning methodology, allowing mutation scanning and genotyping without the need for costly labeled oligonucleotides. In the current study, we aimed to develop a method for identifying the single-nucleotide substitution in SMN1 exon 7 (c.840C>T) by HRM analysis. Genomic DNA was extracted from peripheral blood samples and dried blood spots obtained from 30 patients with SMA and 30 normal individuals. All results were previously confirmed by denaturing high-performance liquid chromatography (DHPLC). A primer set targeting SMN1 exon 7 was used in the HRM analysis. At first, we failed to identify the c.840C>T substitution because the homozygous CC and homozygous TT genotypes cannot be distinguished by HRM analysis. Therefore, all samples were mixed with a sample of known SMN1/SMN2 copy number (SMN1/SMN2 = 0:3), which we call the driver, to differentiate between homozygous CC and homozygous TT. After mixing with the driver, the melting profile of homozygous CC becomes heteroduplex, whereas homozygous TT remains the same in the normalized and temperature-shifted difference plots. HRM analysis can thus be successfully applied to screen for SMA using DNA obtained from whole blood and dried blood spots. We believe that HRM analysis, a high-throughput method, could be used to identify affected infants prior to the presentation of clinical symptoms in the future. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Willse, Alan R.
The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. The tool provides several methods for identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images for identifying poor data quality or processing. The open nature of the software allows researchers to understand the algorithms used to produce intensity estimates and to modify them easily if desired.
Clavibacter michiganensis subsp. phaseoli subsp. nov., pathogenic in bean.
González, Ana J; Trapiello, Estefanía
2014-05-01
A yellow Gram-reaction-positive bacterium isolated from bean seeds (Phaseolus vulgaris L.) was identified as Clavibacter michiganensis by 16S rRNA gene sequencing. Molecular methods were employed in order to identify the subspecies. Such methods included the amplification of specific sequences by PCR, 16S amplified rDNA restriction analysis (ARDRA), RFLP and multilocus sequence analysis as well as the analysis of biochemical and phenotypic traits including API 50CH and API ZYM results. The results showed that strain LPPA 982T did not represent any known subspecies of C. michiganensis. Pathogenicity tests revealed that the strain is a bean pathogen causing a newly identified bacterial disease that we name bacterial bean leaf yellowing. On the basis of these results, strain LPPA 982T is regarded as representing a novel subspecies for which the name Clavibacter michiganensis subsp. phaseoli subsp. nov. is proposed. The type strain is LPPA 982T (=CECT 8144T=LMG 27667T).
Protein Sectors: Statistical Coupling Analysis versus Conservation
Teşileanu, Tiberiu; Colwell, Lucy J.; Leibler, Stanislas
2015-01-01
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that has been used to identify groups of coevolving residues termed “sectors”. The method applies spectral analysis to a matrix obtained by combining correlation information with sequence conservation. It has been asserted that the protein sectors identified by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. Here we reconsider the available experimental data and note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is the dominant factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation. PMID:25723535
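The core SCA construction, combining a positional conservation weight with a position-by-position correlation matrix before spectral analysis, can be sketched briefly. This is a loose, simplified rendering on toy data, not the published SCA implementation: the binary consensus encoding, the uniform background frequency, and the alignment itself are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
aln = rng.integers(0, 20, size=(500, 100))    # toy integer-encoded alignment
n_seq, n_pos = aln.shape
q = 0.05                                      # uniform background frequency (assumption)

# Positional conservation: relative entropy of the most frequent residue
freq = np.array([np.bincount(aln[:, i], minlength=20).max() / n_seq for i in range(n_pos)])
phi = freq * np.log(freq / q) + (1 - freq) * np.log((1 - freq) / (1 - q))

# Correlation of a binary consensus encoding, weighted by conservation
consensus = np.array([np.bincount(aln[:, i], minlength=20).argmax() for i in range(n_pos)])
X = (aln == consensus).astype(float)
C = np.corrcoef(X, rowvar=False)              # position-by-position correlations
sca = np.outer(phi, phi) * np.abs(C)          # conservation-weighted matrix

# Spectral analysis: candidate sector positions load on the top eigenmodes
evals, evecs = np.linalg.eigh(sca)
print("largest weights in leading eigenmode:", np.argsort(np.abs(evecs[:, -1]))[-10:])
```

The paper's point is visible in this construction: because phi multiplies every entry, a highly conserved position can dominate the leading eigenmode even when the correlations C carry little signal.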
Moral deliberation and nursing ethics cases: elements of a methodological proposal.
Schneider, Dulcinéia Ghizoni; Ramos, Flávia Regina Souza
2012-11-01
This qualitative study, with an exploratory, descriptive and documentary design, was conducted with the objective of identifying the elements of a method for analyzing accusations of, and proceedings for, professional ethics infringements. The method is based on underlying elements identified inductively during analysis of professional ethics hearings judged by, and filed in the archives of, the Regional Nursing Board of Santa Catarina, Brazil, between 1999 and 2007. The strategies developed were based on the results of an analysis of the findings of fact (occurrences/infractions, causes and outcomes) contained in the records of 128 professional ethics hearings and on the structural elements (statements, rules and practices) identified in five example professional ethics cases. The strategies suggested for evaluating accusations of ethics infringements and for deliberating on ethics hearings constitute a generic proposal that will require adaptation to the context of specific professional ethics accusations.
A SVM-based quantitative fMRI method for resting-state functional network detection.
Song, Xiaomu; Chen, Nan-kuei
2014-09-01
Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to the variation of data characteristics across sessions and subjects, and can generate unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, resting-state network mapping is formulated as an outlier detection process implemented using a one-class support vector machine (SVM). The results are refined using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by applying a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data, and a comparison study was performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.
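The two-stage decision rule is easy to prototype with scikit-learn. The sketch below is an illustration under stated assumptions, not the paper's pipeline: the per-voxel feature matrix is random toy data, the prototype selection stage is omitted, and the nu and kernel settings are arbitrary; only the one-class-SVM outlier formulation, the two-class reclassification, and the probability comparison in place of a threshold come from the abstract.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(3)
features = rng.normal(size=(5000, 6))        # per-voxel resting-state features (toy)

# Stage 1: one-class SVM treats candidate connected voxels as the "outlier" class
ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(features)
candidate = (ocsvm.predict(features) == -1).astype(int)

# Stage 2: two-class SVM reclassification with probability outputs; the final
# call compares P(connected) with P(unconnected) instead of a fixed threshold
clf = SVC(kernel="rbf", probability=True).fit(features, candidate)
proba = clf.predict_proba(features)
connected = proba[:, 1] > proba[:, 0]
print("voxels mapped as connected:", int(connected.sum()))
```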
IRB Process Improvements: A Machine Learning Analysis.
Shoenbill, Kimberly; Song, Yiqiang; Cobb, Nichelle L; Drezner, Marc K; Mendonca, Eneida A
2017-06-01
Clinical research involving humans is critically important, but it is a lengthy and expensive process. Most studies require institutional review board (IRB) approval. Our objective was to identify predictors of delays or accelerations in the IRB review process and apply this knowledge to inform process change, in an effort to improve IRB efficiency, transparency, consistency and communication. We analyzed timelines of protocol submissions to determine protocol or IRB characteristics associated with different processing times. Our evaluation included single-variable analysis to identify significant predictors of IRB processing time and machine learning methods to predict processing times through the IRB review system. Based on the initially identified predictors, changes to IRB workflow and staffing procedures were instituted, and we repeated our analysis. Our analysis identified several predictors of delays in the IRB review process, including the type of IRB review to be conducted, whether a protocol falls under Veterans Administration purview, and the specific staff in charge of a protocol's review. Application of this knowledge to process improvement efforts in two IRBs has led to increased efficiency in protocol review. The workflow and system enhancements being made support our four-part goal of improving IRB efficiency, consistency, transparency, and communication.
USDA-ARS?s Scientific Manuscript database
Wort beta-glucan concentration is a critical malting quality parameter used to identify and avoid potential brewhouse filtration problems. ASBC method Wort-18 is widely used in malt analysis laboratories and brewhouses to measure wort beta-glucan levels. However, the chemistry underlying the method...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright
Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, a heuristic method for determining the likelihood that a given function contains a cryptographic algorithm is discussed, and the results of applying this method in various environments are shown. The algorithm is based on frequency analysis of the opcodes that make up each function within a binary.
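A toy version of such an opcode-frequency heuristic fits in a few lines. This is a hedged sketch, not the paper's algorithm: the opcode set and the scoring rule (share of bitwise/arithmetic instructions) are illustrative assumptions, and a real tool would learn per-opcode weights from labeled cryptographic functions.

```python
from collections import Counter

# Opcodes that tend to dominate software implementations of ciphers and hashes
CRYPTO_HINTS = {"xor", "rol", "ror", "shl", "shr", "and", "or", "not", "add", "mul"}

def crypto_likelihood(opcodes):
    """Score a disassembled function by the share of bitwise/arithmetic opcodes."""
    if not opcodes:
        return 0.0
    counts = Counter(op.lower() for op in opcodes)
    hits = sum(n for op, n in counts.items() if op in CRYPTO_HINTS)
    return hits / len(opcodes)

# Toy disassembly: a tight xor/rotate loop scores high, a control-flow routine low
print(crypto_likelihood(["mov", "xor", "rol", "xor", "add", "ror", "xor", "jmp"]))
print(crypto_likelihood(["mov", "cmp", "jne", "mov", "call", "ret"]))
```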
ERIC Educational Resources Information Center
Schmidt, Jonathan D.; Drasgow, Erik; Halle, James W.; Martin, Christian A.; Bliss, Sacha A.
2014-01-01
Discrete-trial functional analysis (DTFA) is an experimental method for determining the variables maintaining problem behavior in the context of natural routines. Functional communication training (FCT) is an effective method for replacing problem behavior, once identified, with a functionally equivalent response. We implemented these procedures…
A round robin approach to the analysis of bisphenol A (BPA) in human blood samples
2014-01-01
Background Human exposure to bisphenol A (BPA) is ubiquitous, yet there are concerns about whether BPA can be measured in human blood. This Round Robin was designed to address this concern through three goals: 1) to identify collection materials, reagents and detection apparatuses that do not contribute BPA to serum; 2) to identify sensitive and precise methods to accurately measure unconjugated BPA (uBPA) and BPA-glucuronide (BPA-G), a metabolite, in serum; and 3) to evaluate whether inadvertent hydrolysis of BPA-G occurs during sample handling and processing. Methods Four laboratories participated in this Round Robin. Laboratories screened materials to identify BPA contamination in collection and analysis materials. Serum was spiked with concentrations of uBPA and/or BPA-G ranging from 0.09-19.5 (uBPA) and 0.5-32 (BPA-G) ng/mL. Additional samples were preserved unspiked as ‘environmental’ samples. Blinded samples were provided to laboratories, which used LC-MS/MS to simultaneously quantify uBPA and BPA-G. To determine whether inadvertent hydrolysis of BPA metabolites occurred, samples spiked with only BPA-G were analyzed for the presence of uBPA. Finally, three laboratories compared direct and indirect methods of quantifying BPA-G. Results We identified collection materials and reagents that did not introduce BPA contamination. In the blinded spiked-sample analysis, all laboratories were able to distinguish low from high values of uBPA and BPA-G, both over the whole spiked sample range and for the samples spiked with the three lowest concentrations (0.5-3.1 ng/mL). By completion of the Round Robin, three laboratories had verified methods for the analysis of uBPA and two for the analysis of BPA-G (verification determined by 4 of 5 samples within 20% of spiked concentrations). In the analysis of samples spiked only with BPA-G, all laboratories reported that BPA-G was the majority of the BPA detected (92.2-100%). Finally, laboratories were more likely to be verified using direct methods than indirect ones relying on enzymatic hydrolysis. Conclusions Sensitive and accurate methods for the direct quantification of uBPA and BPA-G were developed in multiple laboratories and can be used for the analysis of human serum samples. BPA contamination can be controlled during sample collection, and inadvertent hydrolysis of BPA conjugates can be avoided during sample handling. PMID:24690217
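The verification rule quoted above (4 of 5 spiked samples recovered within 20%) reduces to a one-line check. The sketch below is illustrative; the measured values are invented, and only the 4-of-5-within-20% criterion comes from the abstract.

```python
def verified(measured, spiked, tolerance=0.20, required=4):
    """Round Robin style check: at least 4 of 5 samples recovered within
    20% of the spiked concentration (values in ng/mL)."""
    within = sum(abs(m - s) / s <= tolerance for m, s in zip(measured, spiked))
    return within >= required

spiked   = [0.5, 1.0, 3.1, 9.7, 19.5]
measured = [0.46, 1.13, 3.0, 9.1, 23.9]   # last sample misses the 20% window
print(verified(measured, spiked))          # True: 4 of 5 within tolerance
```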
Caseload management methods for use within district nursing teams: a literature review.
Roberson, Carole
2016-05-01
Effective and efficient caseload management requires extensive skills to ensure that patients receive the right care from the right person at the right time. District nursing caseloads are continually increasing in size and complexity, which requires specialist district nursing knowledge and skills. This article reviews the literature related to caseload management with the aim of identifying the most effective method for district nursing teams. The findings are that there are different styles and methods of caseload management. The review was unable to identify a single validated tool or method, but it identified themes for implementing effective caseload management, specifically caseload analysis, workload measurement, work allocation, service and practice development, and workforce planning. It also identified some areas for further research.
The EIPeptiDi tool: enhancing peptide discovery in ICAT-based LC MS/MS experiments.
Cannataro, Mario; Cuda, Giovanni; Gaspari, Marco; Greco, Sergio; Tradigo, Giuseppe; Veltri, Pierangelo
2007-07-15
Isotope-coded affinity tagging (ICAT) is a method for quantitative proteomics based on differential isotopic labeling, sample digestion and mass spectrometry (MS). The method allows the identification and relative quantification of proteins present in two samples and consists of the following phases. First, cysteine residues are labeled using either the ICAT Light or the ICAT Heavy reagent (identical chemical properties but different masses). Then, after whole-sample digestion, the labeled peptides are captured selectively using the biotin tag contained in both ICAT reagents. Finally, the simplified peptide mixture is analyzed by nanoscale liquid chromatography-tandem mass spectrometry (LC-MS/MS). Nevertheless, the ICAT LC-MS/MS method still suffers from insufficient sample-to-sample reproducibility in peptide identification. In particular, the number and type of peptides identified in different experiments can vary considerably, making the statistical (comparative) analysis of sample sets very challenging. Low information overlap at the peptide and, consequently, at the protein level is very detrimental when the number of samples to be analyzed is high. We designed a method for improving data processing and peptide identification in sample sets subjected to ICAT labeling and LC-MS/MS analysis, based on cross-validating MS/MS results. The method has been implemented in a tool, called EIPeptiDi, which boosts ICAT data analysis by improving peptide identification across the input data set. Heavy/Light (H/L) pairs quantified but not identified by the MS/MS routine are assigned to peptide sequences identified in other samples, using similarity criteria based on chromatographic retention time and Heavy/Light mass attributes. EIPeptiDi significantly improves the number of identified peptides per sample, demonstrating that the proposed method has a considerable impact on the protein identification process and, consequently, on the amount of potentially critical information in clinical studies. The EIPeptiDi tool is available at http://bioingegneria.unicz.it/~veltri/projects/eipeptidi/ with a demo data set. EIPeptiDi significantly increases the number of peptides identified and quantified in analyzed samples, thus reducing the number of unassigned H/L pairs and allowing a better comparative analysis of sample data sets.
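The cross-assignment rule, matching an unidentified H/L pair to a peptide identified in another run by retention time and mass, can be sketched compactly. The records and tolerances below are invented for illustration and are not EIPeptiDi's defaults; only the matching criteria themselves come from the abstract.

```python
def assign(unidentified, identified, rt_tol=0.5, mass_tol=0.01):
    """Inherit a sequence from another run when retention time (min) and
    mass (Da) agree within the given tolerances (illustrative values)."""
    matches = []
    for pair in unidentified:
        for pep in identified:
            if (abs(pair["rt"] - pep["rt"]) <= rt_tol
                    and abs(pair["mass"] - pep["mass"]) <= mass_tol):
                matches.append((pair["id"], pep["sequence"]))
                break
    return matches

identified   = [{"sequence": "LVNELTEFAK", "rt": 32.4, "mass": 1148.61}]
unidentified = [{"id": "pair-7", "rt": 32.1, "mass": 1148.60}]
print(assign(unidentified, identified))    # [('pair-7', 'LVNELTEFAK')]
```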
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis considers only parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate the reduction functions used for calculating actual rates of nitrification and denitrification. Model uncertainty is entangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of the sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
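The averaging idea can be illustrated with a crude first-order variance-based index. The sketch below is not the paper's method: the binning estimator, the two toy models, the two scalar scenarios, and the uniform model/scenario weights are all illustrative assumptions; it only shows how an index of the form Var(E[Y|Xi])/Var(Y) can be averaged over models and scenarios.

```python
import numpy as np

rng = np.random.default_rng(4)

def first_order_index(x, y, bins=20):
    """Crude binning estimate of S_i = Var(E[Y|X_i]) / Var(Y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Two toy models evaluated under two scalar scenarios, all equally weighted
models = [lambda x, s: s * x[:, 0] ** 2 + x[:, 1],
          lambda x, s: np.sin(s * x[:, 0]) + 0.1 * x[:, 1]]
scenarios = [1.0, 2.0]

X = rng.uniform(-1, 1, size=(20000, 2))
S = np.zeros(2)
for f in models:
    for s in scenarios:
        y = f(X, s)
        S += [first_order_index(X[:, i], y) for i in range(2)]
S /= len(models) * len(scenarios)
print("model- and scenario-averaged first-order indices:", S.round(3))
```

In the averaged framing, a parameter is declared important only if it carries variance across the ensemble, not merely under one model-scenario pair.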
System review: a method for investigating medical errors in healthcare settings.
Alexander, G L; Stone, T T
2000-01-01
System analysis is a process of evaluating the objectives, resources, structure, and design of a business. Leaders can use system analysis to collaboratively identify breakthrough opportunities for improving system processes. In healthcare systems, system analysis can be used to review medical errors (system occurrences) that may place patients at risk of injury, disability, and/or death. This study utilized a case management approach to identify medical errors. Using an interdisciplinary approach, a System Review Team was developed to identify trends in system occurrences, facilitate communication, and enhance the quality of patient care by reducing medical errors.
2013-01-01
Background Triglyceride deposit cardiomyovasculopathy (TGCV) is a rare disease characterized by massive accumulation of triglyceride (TG) in multiple tissues, especially skeletal muscle, heart muscle and the coronary artery. TGCV is caused by mutation of adipose triglyceride lipase, an essential molecule for the hydrolysis of TG. Patients with TGCV are at high risk of skeletal myopathy and heart dysfunction, and therefore of premature death, so the development of therapeutic methods for TGCV is highly desirable. This study aims to discover specific molecules responsible for TGCV pathogenesis. Methods To identify differentially expressed proteins in TGCV patient cells, stable isotope labeling with amino acids in cell culture (SILAC) coupled with LC-MS/MS was performed using skin fibroblast cells derived from two TGCV patients and three healthy volunteers. Altered protein expression in TGCV cells was confirmed using the selected reaction monitoring (SRM) method. Microarray-based transcriptome analysis was performed in parallel to identify changes in gene expression in TGCV cells. Results Using SILAC proteomics, 4033 proteins were quantified, 53 of which showed significantly altered expression in both TGCV patient cell lines. Twenty altered proteins were chosen for confirmation using SRM. SRM analysis successfully quantified 14 proteins, 13 of which showed the same trend as the SILAC proteomics. The altered protein expression data set was used in Ingenuity Pathway Analysis (IPA), and significant networks were identified. Several of these proteins have been previously implicated in lipid metabolism, while others represent new therapeutic targets or markers for TGCV. Microarray analysis quantified 20743 transcripts, and 252 genes showed significantly altered expression in both TGCV patient cell lines. Ten altered genes were chosen, 9 of which were successfully confirmed using quantitative RT-PCR. Biological networks of the altered genes were analyzed using an IPA search. Conclusions We performed a SILAC- and SRM-based identification-through-confirmation study using skin fibroblast cells derived from TGCV patients and identified, for the first time, altered proteins specific to TGCV. Microarray analysis also identified changes in gene expression. The functional networks of the altered proteins and genes are discussed. Our findings will be exploited to elucidate the pathogenesis of TGCV and to discover clinically relevant molecules for TGCV in the near future. PMID:24360150
Richardson, Rodney T.; Lin, Chia-Hua; Sponsler, Douglas B.; Quijia, Juan O.; Goodell, Karen; Johnson, Reed M.
2015-01-01
• Premise of the study: Melissopalynology, the identification of bee-collected pollen, provides insight into the flowers exploited by foraging bees. Information provided by melissopalynology could guide floral enrichment efforts aimed at supporting pollinators, but it has rarely been used because traditional methods of pollen identification are laborious and require expert knowledge. We approach melissopalynology in a novel way, employing a molecular method to study the pollen foraging of honey bees (Apis mellifera) in a landscape dominated by field crops, and compare these results to those obtained by microscopic melissopalynology. • Methods: Pollen was collected from honey bee colonies in Madison County, Ohio, USA, during a two-week period in midspring and identified using microscopic methods and ITS2 metabarcoding. • Results: Metabarcoding identified 19 plant families and showed greater sensitivity than microscopy, which identified eight families, for detecting the taxa present in large and diverse pollen samples. The bulk of the pollen collected by honey bees was from trees (Sapindaceae, Oleaceae, and Rosaceae), although dandelion (Taraxacum officinale) and mustard (Brassicaceae) pollen were also abundant. • Discussion: For quantitative analysis of pollen, using both metabarcoding and microscopic identification is superior to either method alone. For qualitative analysis, ITS2 metabarcoding is superior, providing heightened sensitivity and genus-level resolution. PMID:25606352
Morphological resonances for multicomponent immunoassays
NASA Astrophysics Data System (ADS)
Whitten, W. B.; Shapiro, M. J.; Ramsey, J. M.; Bronk, B. V.
1995-06-01
An immunoassay technique capable of detecting and identifying a number of species of microorganisms in a single analysis is described. The method uses optical-resonance size discrimination of microspheres to identify antibodies to which stained microorganisms are bound.
Structural Identifiability of Dynamic Systems Biology Models
Villaverde, Alejandro F.
2016-01-01
A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
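As a worked miniature of the observability-based approach, the sketch below loosely follows the idea described in the abstract (it is not the STRIKE-GOLDD toolbox, which is MATLAB software): append the parameters to the state vector, stack Lie derivatives of the output, and test the rank of their Jacobian. The toy model x' = -p1*x with output y = p2*x is an invented example; its rank deficiency reflects the familiar scaling non-identifiability between p2 and the unknown initial state.

```python
import sympy as sp

x, p1, p2 = sp.symbols("x p1 p2")
states = [x, p1, p2]                  # extended state: parameters appended as constants
f = [-p1 * x, 0, 0]                   # extended dynamics
h = p2 * x                            # measured output

lies = [h]                            # successive Lie derivatives of the output
for _ in range(len(states) - 1):
    prev = lies[-1]
    lies.append(sum(sp.diff(prev, s) * fs for s, fs in zip(states, f)))

O = sp.Matrix([[sp.diff(L, s) for s in states] for L in lies])
print("rank", O.rank(), "of", len(states))    # rank 2 of 3: model not identifiable
```

A full-rank Jacobian would certify structural identifiability; here the deficit of one tells us that one combination of x(0), p1, p2 can never be pinned down from the output alone.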
Financing Alternatives Comparison Tool
FACT is a financial analysis tool that helps identify the most cost-effective method to fund a wastewater or drinking water management project. It produces a comprehensive analysis that compares various financing options.
Mine safety assessment using gray relational analysis and bow tie model
2018-01-01
Mine safety assessment is a precondition for ensuring orderly and safe production. The main purpose of this study was to prevent mine accidents more effectively by proposing a composite risk analysis model. First, the weights of the assessment indicators were determined by a revised integrated weight method, in which the objective weights were determined by the variation coefficient method and the subjective weights by the Delphi method; a new formula was then adopted to calculate the integrated weights from the subjective and objective weights. Second, after the indicator weights were determined, gray relational analysis was used to evaluate the safety of mine enterprises: enterprises were ranked according to their gray relational degree, and weak links in mine safety practices were identified from the gray relational analysis. Third, to validate the revised integrated weight method adopted in the gray relational analysis, the fuzzy evaluation method was applied to the safety assessment of the same mine enterprises. Fourth, for the first time, the bow tie model was adopted to identify the causes and consequences of the weak links, allowing corresponding safety measures to be taken to guarantee safe production. A case study of mine safety assessment is presented to demonstrate the effectiveness and rationality of the proposed composite risk analysis model, which can be applied to other related industries for safety evaluation. PMID:29561875
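The gray relational step has a standard closed form that is easy to show. The matrix, the weights, and the 0.5 distinguishing coefficient below are illustrative stand-ins (the integrated Delphi/variation-coefficient weights would replace the hard-coded vector); only the use of weighted gray relational degrees to rank enterprises follows the abstract.

```python
import numpy as np

# Toy assessment matrix: rows = mine enterprises, columns = indicators,
# normalized so that larger is better; weights stand in for the integrated ones
X = np.array([[0.80, 0.62, 0.91, 0.55],
              [0.95, 0.71, 0.63, 0.82],
              [0.58, 0.88, 0.77, 0.69]])
w = np.array([0.30, 0.25, 0.25, 0.20])

ref = X.max(axis=0)                    # ideal reference series
delta = np.abs(X - ref)
rho = 0.5                              # distinguishing coefficient (common default)
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
degree = (xi * w).sum(axis=1)          # weighted gray relational degree per enterprise
print("ranking (best first):", np.argsort(-degree))
```

The per-indicator coefficients xi also expose the weak links: a low value flags the indicator on which an enterprise falls furthest from the ideal series.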
Identification and analysis of damaged or porous hair.
Hill, Virginia; Loni, Elvan; Cairns, Thomas; Sommer, Jonathan; Schaffer, Michael
2014-06-01
Cosmetic hair treatments have been referred to as 'the pitfall' of hair analysis. However, most cosmetic treatments, when applied to the hair as instructed by the product vendors, do not interfere with analysis, provided such treatments can be identified by the laboratory and the samples analyzed and reported appropriately for the condition of the hair. This paper provides methods for identifying damaged or porous hair samples using digestion rates of hair in dithiothreitol with and without proteinase K, as well as a protein measurement method applied to dithiothreitol-digested samples. Extremely damaged samples may be unsuitable for analysis. Aggressive and extended aqueous washing of hair samples is a proven method for removing or identifying externally derived drug contamination of hair. In addition to this wash procedure, we have developed an alternative procedure using 90% ethanol for washing damaged or porous hair. The procedure, like the aqueous wash procedure, requires analysis of the last of five washes to evaluate the effectiveness of the washing. This evaluation, termed the Wash Criterion, is derived from studies of the kinetics of washing hair samples that have been experimentally contaminated and hair from drug users. To study decontamination methods, in vitro contaminated drug-negative hair samples were washed by both the aqueous buffer method and the 90% ethanol method. Analysis of cocaine and methamphetamine was performed by liquid chromatography-tandem mass spectrometry (LC/MS/MS). Porous hair samples from drug users, when washed in 90% ethanol, pass the wash criterion although they may fail the aqueous wash criterion. Samples that fail both the ethanolic and aqueous wash criteria are not reported as positive for ingestion. Similar ratios of the metabolite amphetamine relative to methamphetamine in the last wash and in the hair constitute an additional criterion for assessing contamination vs. ingestion of methamphetamine. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Korneva, N. N.; Mogilevskii, M. M.; Nazarov, V. N.
2016-05-01
Traditional methods of time series analysis of satellite ionospheric measurements have some limitations and disadvantages that are mainly associated with the complex nonstationary signal structure. In this paper, the possibility of identifying and studying the temporal characteristics of signals via visual analysis is considered. The proposed approach is illustrated by the example of the visual analysis of wave measurements on the DEMETER microsatellite during its passage over the HAARP facility.
Hawkes, Corinna
2009-01-01
The mapping and analysis of supply chains is a technique increasingly used to address problems in the food system. Yet such supply chain management has not yet been applied as a means of encouraging healthier diets. Moreover, most policies recommended to promote healthy eating focus on the consumer end of the chain. This article proposes a consumption-oriented food supply chain analysis to identify the changes needed in the food supply chain to create a healthier food environment, measured in terms of food availability, prices, and marketing. Along with established forms of supply chain analysis, the method is informed by a historical overview of how food supply chains have changed over time. The method posits that the actors and actions in the chain are affected by organizational, financial, technological, and policy incentives and disincentives, which can in turn be levered for change. It presents a preliminary example of the supply of Coca-Cola beverages into school vending machines and identifies further potential applications. These include fruit and vegetable supply chains, local food chains, supply chains for health-promoting versions of food products, and identifying financial incentives in supply chains for healthier eating. PMID:23144674
Dynamic characterization of high damping viscoelastic materials from vibration test data
NASA Astrophysics Data System (ADS)
Martinez-Agirre, Manex; Elejabarrieta, María Jesús
2011-08-01
The numerical analysis and design of structural systems involving viscoelastic damping materials require knowledge of material properties and proper mathematical models. A new inverse method is presented for the dynamic characterization of high-damping, strongly frequency-dependent viscoelastic materials from vibration test data measured by forced vibration tests with resonance. Classical material parameter extraction methods are reviewed; their accuracy for characterizing high-damping materials is discussed; and the bases of the new analysis method are detailed. The proposed inverse method minimizes the residual between the experimental and theoretical dynamic responses at certain discrete frequencies selected by the user in order to identify the parameters of the material constitutive model. Thus, the material properties are identified over the whole bandwidth under study and not just at resonances. Moreover, the use of control frequencies makes the method insensitive to experimental noise and notably enhances its efficiency: the number of tests required is drastically reduced and the overall process is carried out faster and more accurately. The effectiveness of the proposed method is demonstrated with the characterization of a CLD (constrained layer damping) cantilever beam. First, the elastic properties of the constraining layers are identified from the dynamic response of a metallic cantilever beam. Then, the viscoelastic properties of the core, represented by a four-parameter fractional derivative model, are identified from the dynamic response of a CLD cantilever beam.
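The parameter-identification step can be mimicked with a generic least-squares fit of the four-parameter fractional derivative (fractional Zener) model to complex modulus data at user-selected control frequencies. The sketch below is a simplified stand-in, not the paper's beam-model inverse method: the synthetic data, noise level, bounds, and starting point are all invented, and a real implementation would fit the measured beam response rather than the modulus directly.

```python
import numpy as np
from scipy.optimize import least_squares

def fractional_zener(w, E0, Einf, tau, alpha):
    """Complex modulus of the four-parameter fractional derivative model."""
    s = (1j * w * tau) ** alpha
    return (E0 + Einf * s) / (1 + s)

w = np.logspace(1, 4, 12)                       # control frequencies in rad/s (toy)
truth = fractional_zener(w, 2e6, 2e8, 1e-4, 0.6)
measured = truth * (1 + 0.02 * np.random.default_rng(5).normal(size=w.size))

def residual(p):
    diff = fractional_zener(w, *p) - measured   # complex residual at control frequencies
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residual, x0=[1e6, 1e8, 1e-3, 0.5],
                    bounds=([1e4, 1e6, 1e-7, 0.05], [1e8, 1e10, 1e-1, 1.0]))
print("E0, Einf, tau, alpha =", fit.x)
```

Fitting at a handful of user-chosen frequencies rather than only at resonances is what carries the identification across the whole bandwidth, as the abstract emphasizes.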
Methods for extracting social network data from chatroom logs
NASA Astrophysics Data System (ADS)
Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard
2012-06-01
Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat with multiple users engaging in synchronous, overlapping conversations displayed in a single stream. We approached this problem using three methods that build on previous research: response-time analysis, which builds on the temporal proximity of chat messages; word context usage, which builds on keyword analysis; and direct addressing, which infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performance of the individual methods and their hybrids was assessed relative to a ground truth (determined a priori via manual scoring).
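Of the three methods, response-time analysis is the simplest to sketch: a message posted soon after another user's message is treated as a candidate reply, and repeated candidate replies accumulate into a weighted link. The 20-second window and the toy chat log below are invented for illustration; the paper's actual thresholds and weighting are not reproduced here.

```python
from collections import defaultdict

def response_time_links(messages, window=20.0):
    """messages: (timestamp, user) tuples sorted by time; returns weighted links."""
    links = defaultdict(int)
    for i, (t_i, user_i) in enumerate(messages):
        for t_j, user_j in messages[i + 1:]:
            if t_j - t_i > window:          # later messages are even further away
                break
            if user_j != user_i:            # candidate reply between two users
                links[frozenset((user_i, user_j))] += 1
    return dict(links)

chat = [(0.0, "ada"), (4.2, "bob"), (7.9, "ada"), (35.0, "cleo"), (39.5, "bob")]
for pair, weight in response_time_links(chat).items():
    print(sorted(pair), weight)             # ada-bob accumulates two candidate replies
```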
Hofmann, Bjørn
2017-04-01
The aim was to develop a method for exposing and elucidating ethical issues with human cognitive enhancement (HCE), intended to support and facilitate open and transparent deliberation and decision making with respect to an emerging technology with potentially great formative implications for individuals and society. A literature search was conducted to identify relevant approaches, and conventional content analysis of the identified papers and methods was used to assess their suitability for assessing HCE according to four selection criteria; the method was then developed and amended after pilot testing on smart-glasses. Based on three existing approaches in health technology assessment, a method for exposing and elucidating ethical issues in the assessment of HCE technologies was developed, consisting of six steps and a guiding list of 43 questions. The method provides the groundwork for context-specific ethical assessment and analysis; widespread use, amendment, and further development of the method are encouraged.
Costello, Tracy J; Falk, Catherine T; Ye, Kenny Q
2003-01-01
The Framingham Heart Study data, as well as a related simulated data set, were generously provided to the participants of the Genetic Analysis Workshop 13 in order that newly developed and emerging statistical methodologies could be tested on that well-characterized data set. The impetus driving the development of novel methods is to elucidate the contributions of genes, environment, and interactions between and among them, as well as to allow comparison between and validation of methods. The seven papers that comprise this group used data-mining methodologies (tree-based methods, neural networks, discriminant analysis, and Bayesian variable selection) in an attempt to identify the underlying genetics of cardiovascular disease and related traits in the presence of environmental and genetic covariates. Data-mining strategies are gaining popularity because they are extremely flexible and may have greater efficiency and potential in identifying the factors involved in complex disorders. While the methods grouped together here constitute a diverse collection, some papers asked similar questions with very different methods, while others used the same underlying methodology to ask very different questions. This paper briefly describes the data-mining methodologies applied to the Genetic Analysis Workshop 13 data sets and the results of those investigations. Copyright 2003 Wiley-Liss, Inc.
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize the proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for the analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only two features: nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require the generation of training data sets. We demonstrate that the method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence the timing of cell division.
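A toy version of the two-feature idea: during mitosis, chromatin condensation makes the nuclear region smaller and brighter, so a frame can be called mitotic from area and intensity alone, and durations read off as run lengths. The thresholds, frame interval, and traces below are invented; the published method uses a time-series analysis rather than fixed cutoffs.

```python
import numpy as np

def mitotic_frames(area, intensity, area_max=140.0, intensity_min=80.0):
    # A frame is called mitotic when the nucleus is compact and bright
    return (area < area_max) & (intensity > intensity_min)

def durations(mask, minutes_per_frame=3.0):
    """Lengths of consecutive runs of True, converted to minutes."""
    runs, count = [], 0
    for m in mask:
        if m:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return [r * minutes_per_frame for r in runs]

area      = np.array([200, 198, 130, 110, 105, 115, 190, 205], dtype=float)
intensity = np.array([60, 62, 85, 95, 97, 90, 63, 61], dtype=float)
print(durations(mitotic_frames(area, intensity)))   # [12.0]: one 4-frame mitosis
```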
EHR Improvement Using Incident Reports.
Teame, Tesfay; Stålhane, Tor; Nytrø, Øystein
2017-01-01
This paper discusses reactive improvement of clinical software using methods for incident analysis. We used the "Five Whys" method because we had only descriptive data and depended on a domain expert for the analysis. The analysis showed that there are two major root causes of EHR software failure, both related to human and organizational errors. The main improvement identified was to allocate more resources to system maintenance and user training.
Du, Yushen; Wu, Nicholas C; Jiang, Lin; Zhang, Tianhao; Gong, Danyang; Shu, Sara; Wu, Ting-Ting; Sun, Ren
2016-11-01
Identification and annotation of functional residues are fundamental questions in protein sequence analysis. Sequence and structure conservation provides valuable information to tackle these questions. It is, however, limited by the incomplete sampling of sequence space in natural evolution. Moreover, proteins often have multiple functions, with overlapping sequences that present challenges to accurate annotation of the exact functions of individual residues by conservation-based methods. Using the influenza A virus PB1 protein as an example, we developed a method to systematically identify and annotate functional residues. We used saturation mutagenesis and high-throughput sequencing to measure the replication capacity of single nucleotide mutations across the entire PB1 protein. After predicting protein stability upon mutations, we identified functional PB1 residues that are essential for viral replication. To further annotate the functional residues important to the canonical or noncanonical functions of viral RNA-dependent RNA polymerase (vRdRp), we performed a homologous-structure analysis with 16 different vRdRp structures. We achieved high sensitivity in annotating the known canonical polymerase functional residues. Moreover, we identified a cluster of noncanonical functional residues located in the loop region of the PB1 β-ribbon. We further demonstrated that these residues were important for PB1 protein nuclear import through the interaction with Ran-binding protein 5. In summary, we developed a systematic and sensitive method to identify and annotate functional residues that are not restrained by sequence conservation. Importantly, this method is generally applicable to other proteins about which homologous-structure information is available. To fully comprehend the diverse functions of a protein, it is essential to understand the functionality of individual residues. Current methods are highly dependent on evolutionary sequence conservation, which is usually limited by sampling size. Sequence conservation-based methods are further confounded by structural constraints and multifunctionality of proteins. Here we present a method that can systematically identify and annotate functional residues of a given protein. We used a high-throughput functional profiling platform to identify essential residues. Coupling it with homologous-structure comparison, we were able to annotate multiple functions of proteins. We demonstrated the method with the PB1 protein of influenza A virus and identified novel functional residues in addition to its canonical function as an RNA-dependent RNA polymerase. Not limited to virology, this method is generally applicable to other proteins that can be functionally selected and about which homologous-structure information is available. Copyright © 2016 Du et al.
Zhang, Jia-Yu; Zhang, Qian; Li, Ning; Wang, Zi-Jian; Lu, Jian-Qiu; Qiao, Yan-Jiang
2013-01-30
A modified diagnostic fragment-ion-based extension strategy (DFIBES) coupled with diagnostic fragment ion (DFI) intensity analysis was established to simultaneously screen for and identify the chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ) by HPLC-ESI-MS(n). DFIs, such as m/z 191 [quinic acid-H](-), m/z 179 [caffeic acid-H](-) and m/z 173 [quinic acid-H-H2O](-), were determined or proposed from fragmentation pattern analysis of the corresponding reference substances for each chemical family of CGAs. A "structure extension" method was then proposed based on the well-demonstrated fragmentation patterns and applied to the rapid screening of CGAs in FLJ. Because substitution isomerism is a common phenomenon, a full ESI-MS(n) fragmentation analysis according to the intensity of DFIs was performed to identify the CGA isomers. Based on the DFIs and intensity analysis, 41 peaks attributed to CGAs, including 4 caffeoylquinic acids (CQA), 7 CQA glycosides, 6 dicaffeoylquinic acids (DiCQA), 10 DiCQA glycosides, 1 tricaffeoylquinic acid (TriCQA), 4 p-coumaroylquinic acids (pCoQA), 3 feruloylquinic acids (FQA) and 6 caffeoylferuloylquinic acids (CFQA), were identified preliminarily in a 65-min chromatographic run. This is the first systematic report of the presence of CGAs in FLJ, especially CQA glycosides, DiCQA glycosides, TriCQA, pCoQA and CFQA. The results indicate that the developed DFIBES method coupled with DFI intensity analysis is feasible, reliable and broadly applicable for screening and identifying constituents with the same carbon skeletons, especially isomeric compounds, in complex extracts of traditional Chinese medicines. Copyright © 2012 Elsevier B.V. All rights reserved.
Marino, S R; Lin, S; Maiers, M; Haagenson, M; Spellman, S; Klein, J P; Binkowski, T A; Lee, S J; van Besien, K
2012-02-01
The identification of important amino acid substitutions associated with low survival in hematopoietic cell transplantation (HCT) is hampered by the large number of observed substitutions compared with the small number of patients available for analysis. Random forest analysis is designed to address these limitations. We studied 2107 HCT recipients with good or intermediate risk hematological malignancies to identify HLA class I amino acid substitutions associated with reduced survival at day 100 post transplant. Random forest analysis and traditional univariate and multivariate analyses were used. Random forest analysis identified amino acid substitutions at 33 positions associated with reduced 100-day survival, including HLA-A 9, 43, 62, 63, 76, 77, 95, 97, 114, 116, 152, 156, 166 and 167; HLA-B 97, 109, 116 and 156; and HLA-C 6, 9, 11, 14, 21, 66, 77, 80, 95, 97, 99, 116, 156, 163 and 173. In all, 13 had been previously reported by other investigators using classical biostatistical approaches. Using the same data set, traditional multivariate logistic regression identified only five amino acid substitutions associated with lower day-100 survival. Random forest analysis is a novel statistical methodology for the analysis of HLA mismatching and outcome studies, capable of identifying important amino acid substitutions missed by other methods.
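A minimal sketch of the random-forest screen, assuming a simple binary mismatch encoding per HLA position; the cohort size matches the abstract, but the encoding, the planted effect at two positions, and all numeric settings are invented. Importance ranking, rather than per-substitution hypothesis testing, is what lets the forest surface candidate positions despite the many-substitutions, few-patients imbalance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n_patients, n_positions = 2107, 60           # cohort size from the abstract; positions toy

# Toy encoding: 1 if donor and recipient mismatch at a given HLA position
X = rng.integers(0, 2, size=(n_patients, n_positions))
logit = -1.5 + 1.2 * X[:, 9] + 0.9 * X[:, 42]          # planted effects (invented)
died = rng.random(n_patients) < 1 / (1 + np.exp(-logit))

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, died)
ranked = np.argsort(forest.feature_importances_)[::-1]
print("top positions by importance:", ranked[:5])      # positions 9 and 42 should surface
```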
Proteomic methods for analysis of S-nitrosation
Kettenhofen, Nicholas; Broniowska, Katarzyna; Keszler, Agnes; Zhang, Yanhong; Hogg, Neil
2007-01-01
This review discusses proteomic methods to detect and identify S-nitrosated proteins. Protein S-nitrosation, the post-translational modification of thiol residues to form S-nitrosothiols, has been suggested to be a mechanism of cellular redox signaling by which nitric oxide can alter cellular function through modification of protein thiol residues. It has become apparent that methods that will detect and identify low levels of S-nitrosated protein in complex protein mixtures are required in order to fully appreciate the range, extent and selectivity of this modification in both physiological and pathological conditions. While many advances have been made in the detection of either total cellular S-nitrosation or individual S-nitrosothiols, proteomic methods for the detection of S-nitrosation are in relative infancy. This review will discuss the major methods that have been used for the proteomic analysis of protein S-nitrosation and discuss the pros and cons of this methodology. PMID:17360249
Statistical Coupling Analysis-Guided Library Design for the Discovery of Mutant Luciferases.
Liu, Mira D; Warner, Elliot A; Morrissey, Charlotte E; Fick, Caitlyn W; Wu, Taia S; Ornelas, Marya Y; Ochoa, Gabriela V; Zhang, Brendan S; Rathbun, Colin M; Porterfield, William B; Prescher, Jennifer A; Leconte, Aaron M
2018-02-06
Directed evolution has proven to be an invaluable tool for protein engineering; however, there is still a need for new approaches to improve the efficiency and efficacy of these methods. Here, we demonstrate a new method for library design that applies a previously developed bioinformatic method, Statistical Coupling Analysis (SCA). SCA uses homologous enzymes to identify amino acid positions that are mutable, functionally important, and engaged in synergistic interactions with other positions. We used SCA to guide the design of a luciferase library and demonstrate that, in a single round of selection, we can identify luciferase mutants with several valuable properties. Specifically, we identify luciferase mutants that possess both red-shifted emission spectra and improved stability relative to the wild-type enzyme. We also identify luciferase mutants that possess a >50-fold change in specificity for modified luciferins. To understand the mutational origin of these improved mutants, we demonstrate the role of mutations at N229, S239, and G246 in the altered function. These studies show that SCA can be used to guide library design and rapidly identify synergistic amino acid mutations from a small library.
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
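The steady-state computation that ADAM performs algebraically can be illustrated on a toy scale. The sketch below brute-forces the fixed points of an invented three-node Boolean network; ADAM itself avoids this enumeration by translating the model into a polynomial dynamical system and solving f(x) = x with computer-algebra methods, which is what keeps large sparse models tractable.

```python
from itertools import product

# Tiny Boolean network; over GF(2), AND is multiplication and XOR is addition,
# so each rule below is also a polynomial and steady states solve f(x) = x.
rules = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: s["A"] and s["B"],
}

nodes = sorted(rules)
fixed_points = []
for bits in product([0, 1], repeat=len(nodes)):      # exhaustive state enumeration
    state = dict(zip(nodes, bits))
    image = {n: int(rules[n](state)) for n in nodes}
    if image == state:                               # state maps to itself: attractor
        fixed_points.append(state)
print(fixed_points)
```

Enumeration grows as 2^n and becomes hopeless quickly; replacing it with a polynomial-system solve is precisely the design choice the abstract credits for sub-second analysis of published models.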
Nakamura, Kosuke; Kondo, Kazunari; Akiyama, Hiroshi; Ishigaki, Takumi; Noguchi, Akio; Katsumata, Hiroshi; Takasaki, Kazuto; Futo, Satoshi; Sakata, Kozue; Fukuda, Nozomi; Mano, Junichi; Kitta, Kazumi; Tanaka, Hidenori; Akashi, Ryo; Nishimaki-Mogami, Tomoko
2016-08-15
Identification of transgenic sequences in an unknown genetically modified (GM) papaya (Carica papaya L.) by whole genome sequence analysis was demonstrated. Whole genome sequence data were generated for a GM-positive fresh papaya fruit commodity detected in monitoring using real-time polymerase chain reaction (PCR). The sequences obtained were mapped against an open database of the papaya genome sequence. Transgenic construct- and event-specific sequences were identified as belonging to a GM papaya developed to resist infection by Papaya ringspot virus. Based on the transgenic sequences, a specific real-time PCR detection method for this GM papaya, applicable to various food commodities, was developed. Whole genome sequence analysis enabled identification of unknown transgenic construct- and event-specific sequences in GM papaya and the development of a reliable method for detecting them in papaya food commodities.
Rapid NMR method for the quantification of organic compounds in thin stillage.
Ratanapariyanuch, Kornsulee; Shen, Jianheng; Jia, Yunhua; Tyler, Robert T; Shim, Youn Young; Reaney, Martin J T
2011-10-12
Thin stillage contains organic and inorganic compounds, some of which may be valuable fermentation coproducts. This study describes a thorough analysis of the major solutes present in thin stillage as revealed by NMR and HPLC. The concentration of charged and neutral organic compounds in thin stillage was determined by excitation sculpting NMR methods (double pulse field gradient spin echo). Compounds identified by NMR included isopropanol, ethanol, lactic acid, 1,3-propanediol, acetic acid, succinic acid, glycerophosphorylcholine, betaine, glycerol, and 2-phenylethanol. The concentrations of lactic and acetic acid determined by NMR were comparable to those determined by HPLC. HPLC and NMR were complementary, as more compounds were identified using the two methods together than with either alone. NMR analysis revealed that stillage contained the nitrogenous organic compounds betaine and glycerophosphorylcholine, which contributed as much as 24% of the nitrogen present in the stillage. These compounds were not observed by HPLC analysis.
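For readers unfamiliar with quantitative NMR, the arithmetic behind internal-standard quantitation is a simple area ratio scaled by proton counts, as in the hedged sketch below; all numbers are illustrative and not taken from the study.

    # Sketch: internal-standard qNMR quantitation. Concentration follows from
    # the peak-area ratio scaled by the number of protons behind each peak.
    def qnmr_conc(area_analyte, protons_analyte, area_std, protons_std, conc_std_mM):
        return (area_analyte / protons_analyte) / (area_std / protons_std) * conc_std_mM

    # e.g., a lactic acid CH3 doublet (3H) against a 5.0 mM standard peak (9H)
    print(qnmr_conc(area_analyte=2.4, protons_analyte=3,
                    area_std=1.8, protons_std=9, conc_std_mM=5.0))  # -> 20.0 mM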
2013-01-01
Background Analysis of global gene expression by DNA microarrays is widely used in experimental molecular biology. However, the complexity of such high-dimensional data sets makes it difficult to fully understand the underlying biological features present in the data. The aim of this study is to introduce a method for DNA microarray analysis that provides an intuitive interpretation of data through dimension reduction and pattern recognition. We present the first “Archetypal Analysis” of global gene expression. The analysis is based on microarray data from five integrated studies of Pseudomonas aeruginosa isolated from the airways of cystic fibrosis patients. Results Our analysis clustered samples into distinct groups with comprehensible characteristics since the archetypes representing the individual groups are closely related to samples present in the data set. Significant changes in gene expression between different groups identified adaptive changes of the bacteria residing in the cystic fibrosis lung. The analysis suggests a similar gene expression pattern between isolates with a high mutation rate (hypermutators) despite accumulation of different mutations for these isolates. This suggests positive selection in the cystic fibrosis lung environment, and changes in gene expression for these isolates are therefore most likely related to adaptation of the bacteria. Conclusions Archetypal analysis succeeded in identifying adaptive changes of P. aeruginosa. The combination of clustering and matrix factorization made it possible to reveal minor similarities among different groups of data, which other analytical methods failed to identify. We suggest that this analysis could be used to supplement current methods used to analyze DNA microarray data. PMID:24059747
Chen, Li-Hua; Wu, Yao; Guan, Yong-Mei; Jin, Chen; Zhu, Wei-Feng; Yang, Ming
2018-01-01
Fermented Cordyceps sinensis, the succedaneum of Cordyceps sinensis that is extracted and separated from Cordyceps sinensis by artificial fermentation, is commonly used in clinical treatments in eastern Asia due to its health benefits. In this paper, a new strategy for differentiating and comprehensively evaluating the quality of fermented Cordyceps sinensis products is established, based on high-performance liquid chromatography (HPLC) fingerprint analysis combined with similarity analysis (SA), hierarchical cluster analysis (HCA), and the quantitative analysis of multicomponents by single marker (QAMS). Ten common peaks were collected and analysed using SA, HCA, and QAMS. HCA indicated that the 30 fermented Cordyceps sinensis samples could be categorized into two groups. Five peaks were identified as uracil, uridine, adenine, guanosine, and adenosine; according to the results from the diode array detector, which can be used to confirm peak purity, the purities of these peaks were greater than 990. Adenosine was chosen as the internal reference substance. The relative correction factors (RCF) between adenosine and the other four nucleosides were calculated and investigated using the QAMS method. The accuracy of the QAMS method was confirmed by comparing its results with those of an external standard method using the cosines of the angles between the groups; no significant difference between the two methods was observed. In conclusion, the method established herein was efficient, successful in identifying fermented Cordyceps sinensis products, and scientifically sound for use in the systematic quality control of fermented Cordyceps sinensis products. PMID:29850373
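The QAMS calculation itself is straightforward, as in the following sketch with hypothetical calibration numbers: a relative correction factor ties each nucleoside's detector response to that of the single marker (adenosine), after which routine samples need a standard only for the marker.

    # Sketch: quantitative analysis of multicomponents by single marker (QAMS).
    def response_factor(area, conc):
        return area / conc

    # One-time calibration: adenosine (the single marker) and, once, uridine
    rf_adenosine = response_factor(area=120.0, conc=10.0)   # 12.0 area units per ug/mL
    rf_uridine   = response_factor(area=90.0,  conc=10.0)   #  9.0
    rcf_uridine  = rf_adenosine / rf_uridine                # RCF ~ 1.333

    # Routine sample: only adenosine needs a standard; uridine is quantified
    # from its own peak area via the stored RCF.
    area_uridine_sample = 45.0
    conc_uridine = area_uridine_sample * rcf_uridine / rf_adenosine
    print(round(conc_uridine, 2))  # -> 5.0 ug/mL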
Identification of speech transients using variable frame rate analysis and wavelet packets.
Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung
2006-01-01
Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, in improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for their packet, amplifying coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. An inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than the earlier methods.
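The sketch below illustrates the general idea rather than the authors' exact algorithm (PyWavelets' wavedec/waverec are used in place of a full wavelet packet tree, and the unvoiced consonant detector is omitted): coefficients in each subband are re-weighted by a moving measure of how quickly they change, so transient regions are emphasized on reconstruction.

    import numpy as np
    import pywt

    def transitivity(coeffs, win=5):
        # Moving standard deviation of the coefficient sequence, normalized
        pad = np.pad(coeffs, win, mode="edge")
        t = np.array([pad[i:i + 2 * win + 1].std() for i in range(len(coeffs))])
        return t / (t.max() + 1e-12)

    def emphasize_transients(signal, wavelet="db4", level=4, gain=2.0):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Scale each detail band by (1 + gain * transitivity); keep approximation
        new = [coeffs[0]] + [c * (1.0 + gain * transitivity(c)) for c in coeffs[1:]]
        return pywt.waverec(new, wavelet)

    # Toy usage: a click embedded in a steady tone becomes relatively stronger
    fs = 8000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 200 * t)
    x[4000:4010] += 1.0
    y = emphasize_transients(x)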
Heading in the right direction: thermodynamics-based network analysis and pathway engineering.
Ataman, Meric; Hatzimanikatis, Vassily
2015-12-01
Thermodynamics-based network analysis, through the introduction of thermodynamic constraints in metabolic models, allows a deeper analysis of metabolism and guides pathway engineering. The number and the areas of application of thermodynamics-based network analysis methods have been increasing over the last ten years. We review recent applications of these methods, identify the areas to which such analysis can contribute significantly, and outline the needs for future developments. We find that organisms with multiple compartments and extremophiles present challenges for modeling and thermodynamics-based flux analysis. The evolution of current and new methods must also address the issues of multiple alternative flux directionalities and the uncertainties and partial information from analytical methods.
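The central constraint in such analyses is that a reaction can carry flux in a given direction only if the Gibbs energy is negative for some allowed metabolite concentrations. A minimal sketch, with illustrative numbers rather than curated model data:

    import math

    R, T = 8.314e-3, 298.15  # kJ/(mol*K), K

    def dG_range(dG0, substrates, products):
        # substrates/products: lists of (c_min, c_max) concentration bounds in mol/L.
        # Returns the (min, max) attainable Delta_G = Delta_G0 + RT*ln(Q).
        ln_q_min = sum(math.log(cmin) for cmin, _ in products) - \
                   sum(math.log(cmax) for _, cmax in substrates)
        ln_q_max = sum(math.log(cmax) for _, cmax in products) - \
                   sum(math.log(cmin) for cmin, _ in substrates)
        return dG0 + R * T * ln_q_min, dG0 + R * T * ln_q_max

    lo, hi = dG_range(dG0=5.0, substrates=[(1e-6, 1e-2)], products=[(1e-6, 1e-2)])
    print(lo < 0, hi > 0)  # feasible in both directions within these bounds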
Jorjani, Hadi; Zavolan, Mihaela
2014-04-01
Accurate identification of transcription start sites (TSSs) is an essential step in the analysis of transcription regulatory networks. In higher eukaryotes, cap analysis of gene expression (CAGE) technology enabled comprehensive annotation of TSSs in genomes such as those of mice and humans. In bacteria, an equivalent approach, termed differential RNA sequencing (dRNA-seq), has recently been proposed, but the application of this approach to a large number of genomes is hindered by the paucity of computational analysis methods. With few exceptions, when the method has been used, annotation of TSSs has been done largely manually. In this work, we present a computational method called 'TSSer' that enables the automatic inference of TSSs from dRNA-seq data. The method rests on a probabilistic framework for identifying genomic positions that are both preferentially enriched in the dRNA-seq data and preferentially captured relative to neighboring genomic regions. Evaluating our approach for TSS calling on several publicly available datasets, we find that TSSer achieves high consistency with the curated lists of annotated TSSs, but identifies many additional TSSs. Therefore, TSSer can accelerate genome-wide identification of TSSs in bacterial genomes and can aid in further characterization of bacterial transcription regulatory networks. TSSer is freely available under GPL license at http://www.clipz.unibas.ch/TSSer/index.php
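The flavor of the per-position decision can be sketched as follows; this is a simple binomial enrichment test, not TSSer's actual probabilistic model, and the read counts are invented.

    from scipy.stats import binomtest

    def tss_candidate(n_treated, n_control, lib_ratio=1.0, alpha=1e-3):
        # Test whether 5'-end reads at a position are enriched in the
        # TEX-treated library beyond its expected share of the read pool
        n = n_treated + n_control
        p_expected = lib_ratio / (lib_ratio + 1.0)
        res = binomtest(n_treated, n, p_expected, alternative="greater")
        return res.pvalue < alpha

    print(tss_candidate(n_treated=85, n_control=10))   # True: strong enrichment
    print(tss_candidate(n_treated=12, n_control=11))   # False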
McKenna, J.E.
2003-01-01
The biosphere is filled with complex living patterns, and important questions about biodiversity and about community and ecosystem ecology concern the structure and function of the multispecies systems responsible for those patterns. Cluster analysis identifies discrete groups within multivariate data and is an effective method of coping with these complexities, but it often suffers from subjective identification of groups. The bootstrap testing method greatly improves objective significance determination for cluster analysis. The BOOTCLUS program makes cluster analysis that reliably identifies real patterns within a data set more accessible and easier to use than previously available programs. A variety of analysis options and rapid re-analysis provide a means to quickly evaluate several aspects of a data set. Interpretation is influenced by sampling design and the a priori designation of samples into replicate groups, and ultimately relies on the researcher's knowledge of the organisms and their environment. Nevertheless, the BOOTCLUS program provides reliable, objectively determined groupings of multivariate data.
Dascălu, Cristina Gena; Antohe, Magda Ecaterina
2009-01-01
Based on eigenvalue and eigenvector analysis, principal component analysis identifies a subspace of principal components that is sufficient to characterize a whole set of parameters. Interpreting the data under analysis as a cloud of points, we find through geometrical transformations the directions along which the cloud's dispersion is maximal: the lines that pass through the cloud's center of weight and have a maximal density of points around them (found by defining an appropriate criterion function and minimizing it). This method can be used successfully to simplify the statistical analysis of questionnaires, because it helps select from a set of items only the most relevant ones, which cover the variation of the whole data set. For instance, in the sample presented we started from a questionnaire with 28 items and, applying principal component analysis, identified 7 principal components, or main items, which significantly simplifies further statistical analysis of the data.
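A minimal sketch of this item-selection use of PCA, on simulated questionnaire data: components are retained and the highest-loading item per retained component is taken as a representative "main item". The Kaiser retention criterion (eigenvalue > 1) is an assumption for illustration; the paper does not specify its rule.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 28))                   # 200 respondents, 28 items
    X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # make two items redundant

    Xc = (X - X.mean(0)) / X.std(0)                  # standardize items
    pca = PCA().fit(Xc)
    keep = pca.explained_variance_ > 1.0             # Kaiser criterion
    loadings = pca.components_[keep]                 # shape: (n_kept, 28)
    main_items = np.abs(loadings).argmax(axis=1)     # top-loading item per component
    print(keep.sum(), "components; representative items:", sorted(set(main_items)))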
GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models.
Ligon, Thomas S; Fröhlich, Fabian; Chis, Oana T; Banga, Julio R; Balsa-Canto, Eva; Hasenauer, Jan
2018-04-15
Mathematical modeling using ordinary differential equations is used in systems biology to improve the understanding of dynamic biological processes. The parameters of ordinary differential equation models are usually estimated from experimental data. To analyze a priori the uniqueness of the solution of the estimation problem, structural identifiability analysis methods have been developed. We introduce GenSSI 2.0, an advancement of the software toolbox GenSSI (Generating Series for testing Structural Identifiability). GenSSI 2.0 is the first toolbox for structural identifiability analysis to implement Systems Biology Markup Language import, state/parameter transformations and multi-experiment structural identifiability analysis. In addition, GenSSI 2.0 supports a range of MATLAB versions and is computationally more efficient than its previous version, enabling the analysis of more complex models. GenSSI 2.0 is an open-source MATLAB toolbox and available at https://github.com/genssi-developer/GenSSI. thomas.ligon@physik.uni-muenchen.de or jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.
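A much simpler relative of GenSSI's generating-series approach is a symbolic rank test on successive Lie derivatives of the output; the sketch below (SymPy, toy one-state model) shows how a rank deficiency reveals that only the combination k1 + k2 is identifiable. This illustrates the underlying idea, not GenSSI's implementation.

    import sympy as sp

    x, k1, k2 = sp.symbols("x k1 k2", positive=True)
    f = -(k1 + k2) * x        # dynamics: x' = -(k1 + k2) * x
    y = x                     # observed output

    # Successive Lie derivatives of the output along the vector field f
    lies = [y]
    for _ in range(2):
        lies.append(sp.diff(lies[-1], x) * f)

    # Jacobian of the Lie derivatives with respect to the parameters
    J = sp.Matrix([[sp.diff(L, p) for p in (k1, k2)] for L in lies])
    print(J.rank())  # 1 < 2 parameters: only the combination k1 + k2 is identifiable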
Laskin, Julia [Richland, WA; Futrell, Jean H [Richland, WA
2008-04-29
The invention relates to a method and apparatus for enhanced sequencing of complex molecules using surface-induced dissociation (SID) in conjunction with mass spectrometric analysis. Results demonstrate formation of a broad distribution of structure-specific fragments with wide sequence coverage, useful for sequencing and identifying the complex molecules.
Analysis of the nature and cause of turbulence upset using airline flight records
NASA Technical Reports Server (NTRS)
Parks, E. K.; Bach, R. E., Jr.; Wingrove, R. C.
1982-01-01
The development and application of methods for determining aircraft motions and related winds, using data normally recorded during airline flight operations, are described. The methods are being developed, in cooperation with the National Transportation Safety Board, to aid in the analysis and understanding of circumstances associated with aircraft accidents or incidents. Data from a recent DC-10 encounter with severe, high-altitude turbulence are used to illustrate the methods. The analysis of this encounter shows the turbulence to be a series of equally spaced horizontal swirls known as 'cat's eyes' vortices. The use of flight-data analysis methods to identify this type of turbulence phenomenon is presented for the first time.
A precipitation regionalization and regime for Iran based on multivariate analysis
NASA Astrophysics Data System (ADS)
Raziei, Tayeb
2018-02-01
Monthly precipitation time series of 155 synoptic stations distributed over Iran, covering the 1990-2014 period, were used to identify areas with different precipitation time variability and regimes, utilizing S-mode principal component analysis (PCA) and cluster analysis (CA) preceded by T-mode PCA, respectively. Taking into account the maximum loading values of the rotated components, the first approach revealed five sub-regions characterized by different precipitation time variability, while the second method delineated eight sub-regions featuring different precipitation regimes. The sub-regions identified by the two methods, although partly overlapping, differ in their areal extent and complement each other, as they are useful for different purposes and applications. Northwestern Iran and the Caspian Sea area were found to be the two most distinctive Iranian precipitation sub-regions in terms of both time variability and precipitation regime, since they were well captured, with nearly identical areas, by the two approaches. However, the areal extents of the other three sub-regions identified by the first approach did not coincide with the coverage of their counterpart sub-regions defined by the second approach. The results suggest that the precipitation sub-regions identified by the two methods need not be the same: the first method, which accounts for the variance of the data, groups stations with similar temporal variability, while the second, which considers a fixed climatology defined by the average over 1990-2014, clusters stations having a similar march of monthly precipitation.
Tynkkynen, Soile; Satokari, Reetta; Saarela, Maria; Mattila-Sandholm, Tiina; Saxelin, Maija
1999-01-01
A total of 24 strains, biochemically identified as members of the Lactobacillus casei group, were identified by PCR with species-specific primers. The same set of strains was typed by randomly amplified polymorphic DNA (RAPD) analysis, ribotyping, and pulsed-field gel electrophoresis (PFGE) in order to compare the discriminatory power of the methods. Species-specific primers for L. rhamnosus and L. casei identified the type strain L. rhamnosus ATCC 7469 and the neotype strain L. casei ATCC 334, respectively, but did not give any signal with the recently revived species L. zeae, which contains the type strain ATCC 15820 and the strain ATCC 393, which was previously classified as L. casei. Our results are in accordance with the suggested new classification of the L. casei group. Altogether, 21 of the 24 strains studied were identified with the species-specific primers. In strain typing, PFGE was the most discriminatory method, revealing 17 genotypes for the 24 strains studied. Ribotyping and RAPD analysis yielded 15 and 12 genotypes, respectively. PMID:10473394
Shen, Yufeng; Tolić, Nikola; Xie, Fang; Zhao, Rui; Purvine, Samuel O.; Schepmoes, Athena A.; Moore, Ronald J.; Anderson, Gordon A.; Smith, Richard D.
2011-01-01
We report on the effectiveness of CID, HCD, and ETD for LC-FT MS/MS analysis of peptides using a tandem linear ion trap-Orbitrap mass spectrometer. A range of software tools and analysis parameters were employed to explore the use of CID, HCD, and ETD to identify peptides isolated from human blood plasma without the use of specific “enzyme rules”. In the evaluation of an FDR-controlled SEQUEST scoring method, the use of accurate masses for fragments increased the number of identified peptides (by ~50%) compared to the use of conventional low-accuracy fragment mass information, and CID provided the largest contribution to the identified peptide datasets compared to HCD and ETD. The FDR-controlled Mascot scoring method provided significantly fewer peptide identifications than SEQUEST (by 1.3–2.3 fold) at the same confidence levels, and CID, HCD, and ETD provided similar contributions to identified peptides. Evaluation of de novo sequencing and the UStags method for more intense fragment ions revealed that HCD afforded longer runs of consecutive sequence residues (e.g., ≥7 amino acids) than either CID or ETD. Both the FDR-controlled SEQUEST and Mascot scoring methods provided peptide datasets that were affected by the decoy database and mass tolerances applied (e.g., the overlap of identical peptides between the datasets could be limited to ~70%), while the UStags method provided the most consistent peptide datasets (>90% overlap) with extremely low (near zero) numbers of false positive identifications. The m/z ranges in which CID, HCD, and ETD contributed the largest number of peptide identifications were substantially overlapping. This work suggests that the three peptide ion fragmentation methods are complementary, and that maximizing the number of peptide identifications benefits significantly from a careful match with the informatics tools and methods applied. These results also suggest that the decoy strategy may inaccurately estimate identification FDRs. PMID:21678914
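The decoy strategy mentioned above estimates the false discovery rate by searching a reversed or shuffled database alongside the real one; a hedged sketch of the standard computation, on simulated scores, follows.

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated PSM scores: true and false target hits, plus decoy hits
    target_scores = np.concatenate([rng.normal(3.5, 0.8, 800),
                                    rng.normal(1.5, 0.6, 400)])
    decoy_scores = rng.normal(1.5, 0.6, 400)

    def fdr_at(cutoff):
        # FDR estimate: decoy passes / target passes at this score cutoff
        n_t = (target_scores >= cutoff).sum()
        n_d = (decoy_scores >= cutoff).sum()
        return n_d / max(n_t, 1)

    for cutoff in (2.0, 2.5, 3.0):
        print(cutoff, round(fdr_at(cutoff), 3))  # FDR falls as the cutoff rises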
Kowalczyk, Marek; Sekuła, Andrzej; Mleczko, Piotr; Olszowy, Zofia; Kujawa, Anna; Zubek, Szymon; Kupiec, Tomasz
2015-01-01
Aim To assess the usefulness of a DNA-based method for identifying mushroom species for application in forensic laboratory practice. Methods Two hundred twenty-one samples of clinical forensic material (dried mushrooms, food remains, stomach contents, feces, etc) were analyzed. ITS2 region of nuclear ribosomal DNA (nrDNA) was sequenced and the sequences were compared with reference sequences collected from the National Center for Biotechnology Information gene bank (GenBank). Sporological identification of mushrooms was also performed for 57 samples of clinical material. Results Of 221 samples, positive sequencing results were obtained for 152 (69%). The highest percentage of positive results was obtained for samples of dried mushrooms (96%) and food remains (91%). Comparison with GenBank sequences enabled identification of all samples at least at the genus level. Most samples (90%) were identified at the level of species or a group of closely related species. Sporological and molecular identification were consistent at the level of species or genus for 30% of analyzed samples. Conclusion Molecular analysis identified a larger number of species than sporological method. It proved to be suitable for analysis of evidential material (dried hallucinogenic mushrooms) in forensic genetic laboratories as well as to complement classical methods in the analysis of clinical material. PMID:25727040
Han, Junwei; Li, Chunquan; Yang, Haixiu; Xu, Yanjun; Zhang, Chunlong; Ma, Jiquan; Shi, Xinrui; Liu, Wei; Shang, Desi; Yao, Qianlan; Zhang, Yunpeng; Su, Fei; Feng, Li; Li, Xia
2015-01-01
Identifying dysregulated pathways from high-throughput experimental data in order to infer underlying biological insights is an important task. Current pathway-identification methods focus on single pathways in isolation; however, consideration of crosstalk between pathways could improve our understanding of alterations in biological states. We propose a novel method of pathway analysis based on global influence (PAGI) to identify dysregulated pathways, by considering both within-pathway effects and crosstalk between pathways. We constructed a global gene–gene network based on the relationships among genes extracted from a pathway database. We then evaluated the extent of differential expression for each gene, and mapped them to the global network. The random walk with restart algorithm was used to calculate the extent of genes affected by global influence. Finally, we used cumulative distribution functions to determine the significance values of the dysregulated pathways. We applied the PAGI method to five cancer microarray datasets, and compared our results with gene set enrichment analysis and five other methods. Based on these analyses, we demonstrated that PAGI can effectively identify dysregulated pathways associated with cancer, with strong reproducibility and robustness. We implemented PAGI using the freely available R-based and Web-based tools (http://bioinfo.hrbmu.edu.cn/PAGI). PMID:25551156
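The core propagation step of PAGI is a standard random walk with restart; the sketch below shows that step on a toy four-gene network, with the seed vector standing in for the differential-expression weights.

    import numpy as np

    def rwr(W, p0, restart=0.7, tol=1e-10):
        # Iterate p <- (1 - r) * W p + r * p0 until convergence
        p = p0.copy()
        while True:
            p_next = (1 - restart) * W @ p + restart * p0
            if np.abs(p_next - p).sum() < tol:
                return p_next
            p = p_next

    # Toy 4-gene path network, seed weight on gene 0
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
    W = A / A.sum(axis=0)                  # column-normalize adjacency
    p0 = np.array([1.0, 0.0, 0.0, 0.0])
    print(np.round(rwr(W, p0), 3))         # influence decays with network distance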
Martin, Jeffrey D.; Eberle, Michael; Nakagaki, Naomi
2011-01-01
This report updates a previously published water-quality dataset of 44 commonly used pesticides and 8 pesticide degradates suitable for a national assessment of trends in pesticide concentrations in streams of the United States. Water-quality samples collected from January 1992 through September 2010 at stream-water sites of the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program and the National Stream Quality Accounting Network (NASQAN) were compiled, reviewed, selected, and prepared for trend analysis. The principal steps in data review for trend analysis were to (1) identify analytical schedule, (2) verify sample-level coding, (3) exclude inappropriate samples or results, (4) review pesticide detections per sample, (5) review high pesticide concentrations, and (6) review the spatial and temporal extent of NAWQA pesticide data and selection of analytical methods for trend analysis. The principal steps in data preparation for trend analysis were to (1) select stream-water sites for trend analysis, (2) round concentrations to a consistent level of precision for the concentration range, (3) identify routine reporting levels used to report nondetections unaffected by matrix interference, (4) reassign the concentration value for routine nondetections to the maximum value of the long-term method detection level (maxLT-MDL), (5) adjust concentrations to compensate for temporal changes in bias of recovery of the gas chromatography/mass spectrometry (GCMS) analytical method, and (6) identify samples considered inappropriate for trend analysis. Samples analyzed at the USGS National Water Quality Laboratory (NWQL) by the GCMS analytical method were the most extensive in time and space and, consequently, were selected for trend analysis. Stream-water sites with 3 or more water years of data with six or more samples per year were selected for pesticide trend analysis. The selection criteria described in the report produced a dataset of 21,988 pesticide samples at 212 stream-water sites. Only 21,144 pesticide samples, however, are considered appropriate for trend analysis.
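Step (4) of the data preparation can be illustrated with pandas; the column names and the maxLT-MDL value below are hypothetical stand-ins for the report's actual data structures.

    import pandas as pd

    df = pd.DataFrame({
        "pesticide": ["atrazine", "atrazine", "atrazine"],
        "remark":    ["<", None, "<"],          # '<' marks a routine nondetection
        "conc_ug_L": [0.002, 0.015, 0.004],
    })
    max_lt_mdl = {"atrazine": 0.007}            # illustrative maxLT-MDL value

    # Reassign routine nondetections to the maximum long-term MDL
    nondetect = df["remark"].eq("<")
    df.loc[nondetect, "conc_ug_L"] = df.loc[nondetect, "pesticide"].map(max_lt_mdl)
    print(df)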
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified.
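For readers new to contingency analysis, one common event-based estimate is the difference between the probability of the consequence given a response and given no response; the sketch below computes that quantity on simulated bins and illustrates the general idea rather than reimplementing the four compared methods.

    import numpy as np

    rng = np.random.default_rng(2)
    response = rng.random(1000) < 0.3
    # Consequence follows responses, plus some response-independent deliveries
    consequence = (response & (rng.random(1000) < 0.8)) | (rng.random(1000) < 0.05)

    p_given_r = consequence[response].mean()
    p_given_not_r = consequence[~response].mean()
    print(round(p_given_r - p_given_not_r, 3))  # contingency strength estimate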
Failure mode and effects analysis: a comparison of two common risk prioritisation methods.
McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L
2016-05-01
Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method.
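The two scoring schemes can be sketched side by side; the failure data, tier thresholds, and the choice to drop detectability from the simplified score are hypothetical illustrations of the kind of comparison reported.

    # Traditional RPN = severity x occurrence x detectability (each rated 1-10),
    # next to a simplified three-tier rating.
    failures = [
        {"name": "wrong drip rate", "sev": 9, "occ": 7, "det": 8},
        {"name": "missing handoff", "sev": 7, "occ": 5, "det": 6},
        {"name": "label smudged",   "sev": 3, "occ": 4, "det": 2},
    ]

    def simplified(sev, occ):
        score = sev * occ            # detectability omitted in the simple method
        return "high" if score >= 40 else "medium" if score >= 15 else "low"

    for f in failures:
        rpn = f["sev"] * f["occ"] * f["det"]
        tier = simplified(f["sev"], f["occ"])
        print(f'{f["name"]:18} RPN={rpn:3}  critical={rpn >= 300}  simplified={tier}')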
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
User Guide for the Financing Alternatives Comparison Tool
FACT is a financial analysis tool that helps identify the most cost-effective method to fund a wastewater or drinking water management project. It creates a comprehensive analysis that compares various financing options.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
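DELSA's local measure at each sampled parameter set is a first-order variance decomposition built from model gradients; the hedged sketch below evaluates it at two points of a toy model to show how parameter importance can shift across parameter space. The model is a stand-in, not one of the paper's hydrologic models.

    import numpy as np

    def model(theta):
        # Toy nonlinear response, standing in for a hydrologic model output
        k, n = theta
        return (1.0 / k) ** n

    def delsa_indices(theta, prior_var, h=1e-6):
        # S_j = (dy/dtheta_j)^2 * s_j^2 / sum_k (dy/dtheta_k)^2 * s_k^2,
        # with derivatives from forward finite differences
        grads = np.empty(len(theta))
        for j in range(len(theta)):
            tp = np.array(theta, float)
            tp[j] += h
            grads[j] = (model(tp) - model(theta)) / h
        contrib = grads**2 * prior_var
        return contrib / contrib.sum()

    # Evaluate at two points: parameter importance shifts across the space
    for theta in ([2.0, 1.0], [0.5, 3.0]):
        print(theta, np.round(delsa_indices(theta, prior_var=np.array([0.1, 0.1])), 3))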
Identifying inaccuracy of MS Project using system analysis
NASA Astrophysics Data System (ADS)
Fachrurrazi; Husin, Saiful; Malahayati, Nurul; Irzaidi
2018-05-01
The problem encountered in the project owner's financial accounting reports is the difference between total project costs computed by MS Project and those computed to the Indonesian standard (SNI, the Cost Estimating Standard Book of Indonesia). This is one of MS Project's cost-accuracy problems, and it means that MS Project cost data cannot be used in an integrated way across all project components. This study focuses on finding the causes of the inaccuracy of MS Project. The operational aims of this study are: (i) identifying the cost analysis procedures of both the current method (SNI) and MS Project; (ii) identifying the cost bias in each element of the cost analysis procedure; and (iii) analysing the cost differences (cost bias) in each element to identify the cause of MS Project's inaccuracy relative to SNI. The method used is a comparative system analysis of MS Project and SNI. The results are: (i) the Work of Resources element in MS Project is limited to two decimal digits, which leads to inaccuracy; the Work of Resources (referred to as effort) in MS Project represents the multiplication of the quantity of an activity by its resource requirement in SNI; (ii) MS Project and SNI differ in their costing (cost estimation) methods, with SNI using Quantity-Based Costing (QBC) while MS Project uses Time-Based Costing (TBC). Based on this research, we recommend that contractors who use SNI adjust the Work of Resources in MS Project (with a correction index) so that it can be integrated with the project owner's financial accounting system. Further research will address improving MS Project as an integrated tool for all project participants.
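The rounding mechanism identified in result (i) is easy to demonstrate; the sketch below uses illustrative quantities and rates to show the per-item cost bias and the recommended correction index.

    quantity = 137.5          # m3 of concrete work (illustrative)
    requirement = 0.0333      # worker-days per m3, an SNI-style coefficient
    rate = 250_000            # cost per worker-day (illustrative currency)

    effort_exact = quantity * requirement        # 4.57875 worker-days
    effort_msp = round(effort_exact, 2)          # 4.58 after two-decimal storage

    cost_sni = effort_exact * rate
    cost_msp = effort_msp * rate
    print(cost_msp - cost_sni)                   # per-item cost bias (~312.5)

    # The recommended correction index scales the stored effort back
    correction = effort_exact / effort_msp
    print(round(cost_msp * correction, 2))       # recovers the SNI figure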
Adélie Penguin Population Diet Monitoring by Analysis of Food DNA in Scats
Jarman, Simon N.; McInnes, Julie C.; Faux, Cassandra; Polanowski, Andrea M.; Marthick, James; Deagle, Bruce E.; Southwell, Colin; Emmerson, Louise
2013-01-01
The Adélie penguin is the most important animal currently used for ecosystem monitoring in the Southern Ocean. The diet of this species is generally studied by visual analysis of stomach contents; or ratios of isotopes of carbon and nitrogen incorporated into the penguin from its food. There are significant limitations to the information that can be gained from these methods. We evaluated population diet assessment by analysis of food DNA in scats as an alternative method for ecosystem monitoring with Adélie penguins as an indicator species. Scats were collected at four locations, three phases of the breeding cycle, and in four different years. A novel molecular diet assay and bioinformatics pipeline based on nuclear small subunit ribosomal RNA gene (SSU rDNA) sequencing was used to identify prey DNA in 389 scats. Analysis of the twelve population sample sets identified spatial and temporal dietary change in Adélie penguin population diet. Prey diversity was found to be greater than previously thought. Krill, fish, copepods and amphipods were the most important food groups, in general agreement with other Adélie penguin dietary studies based on hard part or stable isotope analysis. However, our DNA analysis estimated that a substantial portion of the diet was gelatinous groups such as jellyfish and comb jellies. A range of other prey not previously identified in the diet of this species were also discovered. The diverse prey identified by this DNA-based scat analysis confirms that the generalist feeding of Adélie penguins makes them a useful indicator species for prey community composition in the coastal zone of the Southern Ocean. Scat collection is a simple and non-invasive field sampling method that allows DNA-based estimation of prey community differences at many temporal and spatial scales and provides significant advantages over alternative diet analysis approaches. PMID:24358158
Aiba, Toshiki; Saito, Toshiyuki; Hayashi, Akiko; Sato, Shinji; Yunokawa, Harunobu; Maruyama, Toru; Fujibuchi, Wataru; Kurita, Hisaka; Tohyama, Chiharu; Ohsako, Seiichiroh
2017-03-09
It has been pointed out that environmental factors or chemicals can cause diseases that are developmental in origin. To detect abnormal epigenetic alterations in DNA methylation, convenient and cost-effective methods are required for such research, in which multiple samples are processed simultaneously. Here we present methylated site display (MSD), a unique technique for the preparation of DNA libraries. By combining it with amplified fragment length polymorphism (AFLP) analysis, we developed a new method, MSD-AFLP. Methylated site display libraries consist only of DNAs derived from fragments that are CpG methylated at the 5' end in the original genomic DNA sample. To test the effectiveness of this method, CpG methylation levels in liver, kidney, and hippocampal tissues of mice were compared to examine whether MSD-AFLP can detect subtle differences in the levels of tissue-specific differentially methylated CpGs. As a result, many CpG sites suspected of tissue-specific differential methylation were detected. Nucleotide sequences adjacent to these methyl-CpG sites were identified, and we determined the methylation level by methylation-sensitive restriction endonuclease (MSRE)-PCR analysis to confirm the accuracy of the AFLP analysis. The differences in methylation level among tissues were almost identical between these methods. By MSD-AFLP analysis, we detected many CpGs showing statistically significant tissue-specific differences of less than 5% and variability of less than 10%. Additionally, MSD-AFLP analysis could be used to identify CpG methylation sites in other organisms, including humans. MSD-AFLP analysis can potentially be used to measure slight changes in CpG methylation level. Given its remarkable precision, sensitivity, and throughput, MSD-AFLP analysis will be advantageous in a variety of epigenetics-based research.
Identification of FGF7 as a novel susceptibility locus for chronic obstructive pulmonary disease.
Brehm, John M; Hagiwara, Koichi; Tesfaigzi, Yohannes; Bruse, Shannon; Mariani, Thomas J; Bhattacharya, Soumyaroop; Boutaoui, Nadia; Ziniti, John P; Soto-Quiros, Manuel E; Avila, Lydiana; Cho, Michael H; Himes, Blanca; Litonjua, Augusto A; Jacobson, Francine; Bakke, Per; Gulsvik, Amund; Anderson, Wayne H; Lomas, David A; Forno, Erick; Datta, Soma; Silverman, Edwin K; Celedón, Juan C
2011-12-01
Traditional genome-wide association studies (GWASs) of large cohorts of subjects with chronic obstructive pulmonary disease (COPD) have successfully identified novel candidate genes, but several other plausible loci do not meet strict criteria for genome-wide significance after correction for multiple testing. The authors hypothesise that by applying unbiased weights derived from unique populations they can identify additional COPD susceptibility loci. Methods The authors performed a homozygosity haplotype analysis on a group of subjects with and without COPD to identify regions of conserved homozygosity haplotype (RCHHs). Weights were constructed based on the frequency of these RCHHs in cases versus controls, and used to adjust the p values from a large collaborative GWAS of COPD. The authors identified 2318 RCHHs, of which 576 were significantly (p<0.05) over-represented in cases. After applying the weights constructed from these regions to a collaborative GWAS of COPD, the authors identified two single nucleotide polymorphisms (SNPs) in a novel gene (fibroblast growth factor-7 (FGF7)) that gained genome-wide significance by the false discovery rate method. In a follow-up analysis, both SNPs (rs12591300 and rs4480740) were significantly associated with COPD in an independent population (combined p values of 7.9E-7 and 2.8E-6, respectively). In another independent population, increased lung tissue FGF7 expression was associated with worse measures of lung function. Weights constructed from a homozygosity haplotype analysis of an isolated population successfully identify novel genetic associations from a GWAS on a separate population. This method can be used to identify promising candidate genes that fail to meet strict correction for multiple testing.
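The weighting idea can be sketched as follows; the exact weighting scheme in the paper may differ, and the p-values and weights below are invented. Requires statsmodels.

    import numpy as np
    from statsmodels.stats.multitest import multipletests

    pvals = np.array([5e-7, 3e-6, 2e-4, 0.03, 0.5])       # raw GWAS p-values
    rchh_weight = np.array([2.0, 0.5, 3.0, 1.0, 0.8])     # RCHH enrichment in cases

    # Divide p-values by mean-normalized weights, then control FDR
    w = rchh_weight / rchh_weight.mean()
    p_weighted = np.clip(pvals / w, 0, 1)
    reject, p_adj, _, _ = multipletests(p_weighted, alpha=0.05, method="fdr_bh")
    print(reject)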
Operations planning and analysis handbook for NASA/MSFC phase B development projects
NASA Technical Reports Server (NTRS)
Batson, Robert C.
1986-01-01
Current operations planning and analysis practices on NASA/MSFC Phase B projects were investigated, with the objectives of (1) formalizing these practices into a handbook and (2) suggesting improvements. The study focused on how Science and Engineering (S&E) operations personnel support Program Development (PD) task teams. The intimate relationship between systems engineering and operations analysis was examined. Methods identified for use by operations analysts during Phase B include functional analysis, interface analysis, and methods to calculate and allocate such criteria as reliability, maintainability, and operations and support cost.
Park, Hyunseok; Magee, Christopher L
2017-01-01
The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss dominant patents from the identified main paths, and moreover the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents.
Bridges, John F P; Hauber, A Brett; Marshall, Deborah; Lloyd, Andrew; Prosser, Lisa A; Regier, Dean A; Johnson, F Reed; Mauskopf, Josephine
2011-06-01
The application of conjoint analysis (including discrete-choice experiments and other multiattribute stated-preference methods) in health has increased rapidly over the past decade. A wider acceptance of these methods is limited by an absence of consensus-based methodological standards. The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Good Research Practices for Conjoint Analysis Task Force was established to identify good research practices for conjoint-analysis applications in health. The task force met regularly to identify the important steps in a conjoint analysis, to discuss good research practices for conjoint analysis, and to develop and refine the key criteria for identifying good research practices. ISPOR members contributed to this process through an extensive consultation process. A final consensus meeting was held to revise the article using these comments, and those of a number of international reviewers. Task force findings are presented as a 10-item checklist covering: 1) research question; 2) attributes and levels; 3) construction of tasks; 4) experimental design; 5) preference elicitation; 6) instrument design; 7) data-collection plan; 8) statistical analyses; 9) results and conclusions; and 10) study presentation. A primary question relating to each of the 10 items is posed, and three sub-questions examine finer issues within items. Although the checklist should not be interpreted as endorsing any specific methodological approach to conjoint analysis, it can facilitate future training activities and discussions of good research practices for the application of conjoint-analysis methods in health care studies.
Identification and Assessment of Taiwanese Children's Conceptions of Learning Mathematics
ERIC Educational Resources Information Center
Chiu, Mei-Shiu
2012-01-01
The aim of the present study was to identify children's conceptions of learning mathematics and to assess the identified conceptions. Children's conceptions are identified by interviewing 73 grade 5 students in Taiwan. The interviews are analyzed using qualitative data analysis methods, which results in a structure of 5 major conceptions, each…
A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.
Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C
2017-07-01
Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes.
Manoharan, Prabu; Ghoshal, Nanda
2018-05-01
Traditional structure-based virtual screening methods to identify drug-like small molecules for BACE1 have so far been unsuccessful. The location of BACE1, poor blood-brain barrier permeability, and P-glycoprotein (Pgp) susceptibility of the inhibitors make the task even more difficult. Fragment-based drug design is suitable for the efficient optimization of initial hit molecules for a target like BACE1. We have developed a fragment-based virtual screening approach to identify and optimize fragment molecules as a starting point. This method combines the shape, electrostatic, and pharmacophoric features of known fragment molecules bound in protein conjugate crystal structures, and aims to identify both chemically and energetically feasible small fragment ligands that bind to the BACE1 active site. The two top-ranked fragment hits were subjected to a 53 ns MD simulation. Principal component analysis and free energy landscape analysis reveal that the new ligands show the characteristic features of established BACE1 inhibitors. The method employed in this study may serve for the development of potential lead molecules for BACE1-directed Alzheimer's disease therapeutics.
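The free-energy-landscape step of such an analysis is commonly computed as G = -kT ln P over the first two principal components of the trajectory; a minimal sketch on simulated frames (standing in for MD coordinates) follows.

    import numpy as np
    from sklearn.decomposition import PCA

    kT = 2.494   # kJ/mol at 300 K
    rng = np.random.default_rng(3)
    traj = rng.normal(size=(5000, 30))        # 5000 frames, 10 atoms x 3 coords

    # Project frames onto the first two principal components
    pcs = PCA(n_components=2).fit_transform(traj - traj.mean(0))

    # Free energy landscape from the 2D probability density
    H, xe, ye = np.histogram2d(pcs[:, 0], pcs[:, 1], bins=40, density=True)
    with np.errstate(divide="ignore"):
        G = -kT * np.log(H)                   # unpopulated bins become +inf
    G -= G.min()                              # set the global minimum to zero
    print(np.nanmin(G), np.isfinite(G).sum(), "populated bins")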
Tsuchiya, Megumi; Karim, M Rezaul; Matsumoto, Taro; Ogawa, Hidesato; Taniguchi, Hiroaki
2017-01-24
Transcriptional coregulators are vital to the efficient transcriptional regulation of nuclear chromatin structure. Coregulators play a variety of roles in regulating transcription, including direct interaction with transcription factors, covalent modification of histones and other proteins, and, in some cases, alteration of chromatin conformation. Accordingly, establishing relatively quick methods for identifying proteins that interact within this network is crucial to enhancing our understanding of the underlying regulatory mechanisms. LC-MS/MS-mediated identification of protein binding partners is a validated technique used to analyze protein-protein interactions. By immunoprecipitating a previously identified member of a protein complex with an antibody (occasionally with an antibody against a tagged protein), it is possible to identify its unknown protein interactions via mass spectrometry analysis. Here, we present a method of protein preparation for the LC-MS/MS-mediated high-throughput identification of protein interactions involving nuclear cofactors and their binding partners. This method allows for a better understanding of the transcriptional regulatory mechanisms of the targeted nuclear factors.
A Fuzzy Computing Model for Identifying Polarity of Chinese Sentiment Words
Huang, Yongfeng; Wu, Xian; Li, Xing
2015-01-01
With the spurt of online user-generated content on the web, sentiment analysis has become a very active research issue in data mining and natural language processing. As the most important indicators of sentiment, sentiment words which convey positive and negative polarity are instrumental for sentiment analysis. However, most existing methods for identifying the polarity of sentiment words treat positive and negative polarity as a crisp (Cantor) set, and pay no attention to the fuzziness of the polarity intensity of sentiment words. To improve performance, we propose a fuzzy computing model to identify the polarity of Chinese sentiment words in this paper. This paper makes three major contributions. First, we propose a method to compute the polarity intensity of sentiment morphemes and sentiment words. Second, we construct a fuzzy sentiment classifier and propose two different methods to compute the parameter of the fuzzy classifier. Third, we conduct extensive experiments on four sentiment word datasets and three review datasets, and the experimental results indicate that our model performs better than the state-of-the-art methods. PMID:26106409
A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes
Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung
2016-01-01
Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest, such as survival time, are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach is a naïve one that detects DEGs and PAGs separately and then takes the intersection of the two lists. The second is a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs, or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method identifies genes that are simultaneously DEGs and PAGs. It uses standard regression models but adopts a different null hypothesis from ordinary regression models, which allows joint identification to be performed in one step. The proposed model-based methods were evaluated using experimental data and simulation studies. They were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. The model-based approaches identified a larger number of genes that are both DEGs and PAGs than the other methods, and simulation studies showed that they have more power. Since our approach is model-based, it is very flexible and can easily handle different types of covariates. PMID:26964035
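As a loose sketch of the one-step idea, the snippet below fits a single regression per gene with both a group term and a phenotype term and combines the two coefficient tests with an intersection-union rule; this is one plausible reading, not necessarily the authors' exact null hypothesis, and all data are simulated.

```python
# A rough sketch of one-step joint DEG/PAG testing for a single gene, read
# as an intersection-union test (the paper's exact model may differ).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
diet = rng.integers(0, 2, n).astype(float)      # two diet groups
phenotype = rng.normal(size=n)                  # e.g., a leptin level
expr = 0.8 * diet + 0.5 * phenotype + rng.normal(size=n)

X = sm.add_constant(np.column_stack([diet, phenotype]))
fit = sm.OLS(expr, X).fit()
p_deg, p_pag = fit.pvalues[1], fit.pvalues[2]
p_joint = max(p_deg, p_pag)   # reject "not both" only if both terms reject
print(f"p_DEG={p_deg:.3g}, p_PAG={p_pag:.3g}, joint p={p_joint:.3g}")
```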
2013-01-01
Background Many large-scale studies analyzed high-throughput genomic data to identify altered pathways essential to the development and progression of specific types of cancer. However, no previous study has been extended to provide a comprehensive analysis of pathways disrupted by copy number alterations across different human cancers. Towards this goal, we propose a network-based method to integrate copy number alteration data with human protein-protein interaction networks and pathway databases to identify pathways that are commonly disrupted in many different types of cancer. Results We applied our approach to a data set of 2,172 cancer patients across 16 different types of cancers, and discovered a set of commonly disrupted pathways, which are likely essential for tumor formation in majority of the cancers. We also identified pathways that are only disrupted in specific cancer types, providing molecular markers for different human cancers. Analysis with independent microarray gene expression datasets confirms that the commonly disrupted pathways can be used to identify patient subgroups with significantly different survival outcomes. We also provide a network view of disrupted pathways to explain how copy number alterations affect pathways that regulate cell growth, cycle, and differentiation for tumorigenesis. Conclusions In this work, we demonstrated that the network-based integrative analysis can help to identify pathways disrupted by copy number alterations across 16 types of human cancers, which are not readily identifiable by conventional overrepresentation-based and other pathway-based methods. All the results and source code are available at http://compbio.cs.umn.edu/NetPathID/. PMID:23822816
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, these methods have several shortcomings, including strong dependence on initial seed values and inaccurate results when the data depart seriously from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic algorithm based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
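As a hedged illustration of information-based model selection (the abstract does not name the exact information score; AIC is assumed here), the sketch below scores every predictor subset rather than stepping variables in and out, avoiding stepwise entry/exit thresholds; the data are simulated.

```python
# Exhaustive subset scoring by AIC as an alternative to stepwise regression;
# predictors and response below are synthetic.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 4))                     # candidate model terms
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)

best = None
for k in range(1, 5):
    for subset in itertools.combinations(range(4), k):
        fit = sm.OLS(y, sm.add_constant(X[:, subset])).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, subset)
print(f"lowest AIC {best[0]:.1f} with terms {best[1]}")
```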
Automatic comic page image understanding based on edge segment analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai
2013-12-01
Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and to further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
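The first two stages can be loosely illustrated with OpenCV, assuming a placeholder input file; the paper's own edge-point chaining and top-down line detection are more elaborate than the stock Canny and Hough calls used here.

```python
# A loose sketch: Canny edge map, then candidate line segments.
# "page.png" is a placeholder file name.
import numpy as np
import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "placeholder input image not found"
edges = cv2.Canny(img, 50, 150)                  # Canny edge map
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=5)
print(0 if lines is None else len(lines), "candidate border segments")
```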
Zhou, Fei; Zhao, Yajing; Peng, Jiyu; Jiang, Yirong; Li, Maiquan; Jiang, Yuan; Lu, Baiyi
2017-07-01
Osmanthus fragrans flowers are used as folk medicine and as additives for teas, beverages and foods. The metabolites of O. fragrans flowers from different geographical origins are inconsistent to some extent. Chromatography and mass spectrometry combined with multivariable analysis methods provide an approach for discriminating the origin of O. fragrans flowers. The aim was to discriminate Osmanthus fragrans var. thunbergii flowers from different origins using the identified metabolites. GC-MS and UPLC-PDA were conducted to analyse the metabolites in O. fragrans var. thunbergii flowers (150 samples in total). Principal component analysis (PCA), soft independent modelling of class analogy analysis (SIMCA) and random forest (RF) analysis were applied to group the GC-MS and UPLC-PDA data. GC-MS identified 32 compounds common to all samples, while UPLC-PDA/QTOF-MS identified 16 common compounds. PCA of the UPLC-PDA data generated better clustering than PCA of the GC-MS data. Ten metabolites (six from GC-MS and four from UPLC-PDA) were selected via PCA loadings as effective compounds for discrimination. SIMCA and RF analysis were used to build classification models, and the RF model, based on the four effective compounds (a caffeic acid derivative, acteoside, ligustroside and compound 15), yielded better results with a classification rate of 100% in the calibration set and 97.8% in the prediction set. GC-MS and UPLC-PDA combined with multivariable analysis methods can discriminate the origin of Osmanthus fragrans var. thunbergii flowers. Copyright © 2017 John Wiley & Sons, Ltd.
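A minimal sketch of the random-forest step follows, with synthetic intensities standing in for the four marker compounds and three assumed origin classes; the data are not the study's measurements.

```python
# Random-forest discrimination of origin from marker-compound intensities;
# all values below are simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
y = np.repeat([0, 1, 2], 50)                      # three origins, 150 samples
X = rng.normal(size=(150, 4)) + 0.8 * y[:, None]  # origin-dependent shift
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"prediction-set accuracy: {rf.score(X_te, y_te):.2%}")
```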
Support vector machine based classification of fast Fourier transform spectroscopy of proteins
NASA Astrophysics Data System (ADS)
Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine
2009-02-01
Fast Fourier transform spectroscopy has proved to be a powerful method for the study of the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine learning based method that uses protein spectra for classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of original spectral components and then employs a support vector machine (SVM) classification model applied to these combinations to categorize proteins into one of the given groups. Our experiments were performed on a set of four different proteins, namely Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. The proposed combination of principal component analysis with support vector machines exhibits excellent classification accuracy when identifying proteins using their infrared spectra.
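A minimal sketch of the PCA-then-SVM pipeline follows; the "spectra" are synthetic stand-ins for infrared protein spectra, with one invented class-specific band each.

```python
# PCA feature extraction feeding an SVM classifier, evaluated by
# cross-validation; data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(3)
spectra = rng.normal(size=(80, 500))   # 80 samples x 500 wavenumber bins
labels = np.repeat(np.arange(4), 20)   # four protein classes
for c in range(4):
    spectra[labels == c, 100 + 50 * c] += 3.0   # class-specific peak
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
print("cross-validated accuracy:",
      cross_val_score(model, spectra, labels, cv=5).mean())
```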
Semi-supervised word polarity identification in resource-lean languages.
Dehdarbehbahani, Iman; Shakery, Azadeh; Faili, Heshaam
2014-10-01
Sentiment words, as fundamental constitutive parts of subjective sentences, have a substantial effect on the analysis of opinions, emotions and beliefs. Most of the proposed methods for identifying the semantic orientations of words exploit rich linguistic resources such as WordNet, subjectivity corpora, or polarity-tagged words. A shortage of such linguistic resources in resource-lean languages hurts the performance of word polarity identification in these languages. In this paper, we present a method which exploits a language with rich subjectivity analysis resources (English) to identify the polarity of words in a resource-lean foreign language. The English WordNet and a sparse foreign WordNet infrastructure are used to create a heterogeneous, multilingual, weighted semantic network. To identify the semantic orientation of foreign words, a random walk based method is applied to the semantic network along with a set of automatically weighted English positive and negative seeds. In a post-processing phase, synonym and antonym relations in the foreign WordNet are used to filter the random walk results. Our experiments on English and Persian show that the proposed method can outperform state-of-the-art word polarity identification methods in both languages. Copyright © 2014 Elsevier Ltd. All rights reserved.
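A toy sketch of seed-based random-walk scoring on a word graph via personalized PageRank follows; the tiny graph, cross-lingual links, node names, and seeds are all invented, not the paper's English/foreign WordNet network.

```python
# Score polarity as the difference between walks restarted at positive and
# negative seeds; "foreign_pos"/"foreign_neg" are placeholder foreign words.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("good", "great", 1.0), ("great", "foreign_pos", 0.8),
    ("bad", "awful", 1.0), ("awful", "foreign_neg", 0.8),
    ("good", "bad", 0.1),
])
pos = nx.pagerank(G, personalization={"good": 1.0}, weight="weight")
neg = nx.pagerank(G, personalization={"bad": 1.0}, weight="weight")
polarity = {w: pos[w] - neg[w] for w in G}
print(sorted(polarity.items(), key=lambda kv: -kv[1]))
```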
Prioritizing individual genetic variants after kernel machine testing using variable selection.
He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C
2016-12-01
Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) in an identified set actually drive the association. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene-set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides a practically useful means to prioritize SNPs and fills the gap between SNP-set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
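The IBS kernel mentioned above has a compact definition; the sketch below computes it for simulated genotypes coded 0/1/2 and is only the kernel matrix, not the SKAT test or the KNIFE selection step.

```python
# Identity-by-State kernel: alleles shared per SNP = 2 - |g_i - g_j|,
# averaged over SNPs and normalized by 2; genotypes are simulated.
import numpy as np

rng = np.random.default_rng(4)
G = rng.integers(0, 3, size=(10, 25))   # 10 subjects x 25 SNPs
n, p = G.shape
K = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        K[i, j] = np.sum(2 - np.abs(G[i] - G[j])) / (2 * p)
print(K.round(2))
```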
NASA Technical Reports Server (NTRS)
Prost, L.; Pauillac, A.
1978-01-01
Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.
Heuristics to Facilitate Understanding of Discriminant Analysis.
ERIC Educational Resources Information Center
Van Epps, Pamela D.
This paper discusses the principles underlying discriminant analysis and constructs a simulated data set to illustrate its methods. Discriminant analysis is a multivariate technique for identifying the best combination of variables to maximally discriminate between groups. Discriminant functions are established on existing groups and used to…
EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.
Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina
2009-04-01
In this paper we investigate the use of data-driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
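A toy sketch of the two clustering routes on synthetic voxel time series follows; sklearn's stock solvers stand in for the paper's Nyström-approximated decomposition, and the "coherent system" is simulated.

```python
# Spectral clustering on a correlation-derived affinity versus K-means on
# raw time series; all data are synthetic.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(5)
ts = rng.normal(size=(200, 120))             # 200 voxels x 120 time points
ts[:100] += np.sin(np.linspace(0, 8, 120))   # one coherent "system"
affinity = (np.corrcoef(ts) + 1) / 2         # correlations mapped to [0, 1]
spec = SpectralClustering(2, affinity="precomputed").fit_predict(affinity)
km = KMeans(2, n_init=10).fit_predict(ts)
print("spectral cluster sizes:", np.bincount(spec),
      "| k-means cluster sizes:", np.bincount(km))
```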
Yu, Marcia M L; Sandercock, P Mark L
2012-01-01
During forensic examinations, textile fibers are usually mounted on glass slides for visual inspection and identification under the microscope. One method that can accurately identify single textile fibers without subsequent demounting is Raman microspectroscopy. The effect of the mountant Entellan New on the Raman spectra of fibers was investigated to determine whether it is suitable for fiber analysis. Raman spectra of synthetic fibers mounted in three different ways were collected and subjected to multivariate analysis. Principal component analysis score plots revealed that while spectra from different fiber classes formed distinct groups, fibers of the same class formed a single group regardless of the mounting method. The spectra of bare fibers and those mounted in Entellan New were found to be statistically indistinguishable by analysis of variance calculations. These results demonstrate that fibers mounted in Entellan New may be identified directly by Raman microspectroscopy without further sample preparation. © 2011 American Academy of Forensic Sciences.
Wang, Tianyu; Nabavi, Sheida
2018-04-24
Differential gene expression analysis is a central task in single cell RNA sequencing (scRNAseq) analysis, used to discover specific changes in the expression levels of individual cell types. Because scRNAseq data exhibit multimodality, large proportions of zero counts, and sparsity, they differ from traditional bulk RNA sequencing (RNAseq) data. These new challenges of scRNAseq data have prompted the development of new methods for identifying differentially expressed (DE) genes. In this study, we propose a new method, SigEMD, that combines a data imputation approach, a logistic regression model, and a nonparametric method based on the Earth Mover's Distance to precisely and efficiently identify DE genes in scRNAseq data. The regression model and data imputation are used to reduce the impact of large numbers of zero counts, and the nonparametric method is used to improve the sensitivity of detecting DE genes in multimodal scRNAseq data. By additionally employing gene interaction network information to adjust the final states of DE genes, we further reduce the false positives of calling DE genes. We used simulated and real datasets to evaluate the detection accuracy of the proposed method and to compare its performance with those of other differential expression analysis methods. Results indicate that the proposed method performs powerfully overall in terms of precision of detection, sensitivity, and specificity. Copyright © 2018 Elsevier Inc. All rights reserved.
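A toy sketch of scoring one gene with the Earth Mover's Distance (1-D Wasserstein) follows; SigEMD layers imputation, a logistic regression model, and network-based adjustment on top of this core idea, and the data here are simulated, zero-inflated distributions.

```python
# EMD between one gene's expression distributions in two cell groups.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
a = np.where(rng.random(100) < 0.4, 0.0, rng.normal(5, 1, 100))
b = np.where(rng.random(100) < 0.4, 0.0, rng.normal(7, 1, 100))
print(f"EMD between cell groups: {wasserstein_distance(a, b):.2f}")
```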
Development of a graphical method for choosing the optimal mode of traffic light
NASA Astrophysics Data System (ADS)
Novikov, A. N.; Katunin, A. A.; Novikov, I. A.; Kravchenko, A. A.; Shevtsova, A. G.
2018-05-01
Changing the transportation infrastructure to improve the main characteristics of the traffic flow is the key problem in transportation planning, so the central question is the ability to plan changes in the main indicators over the long term. In this investigation, an analysis of the city's population was performed and the most difficult transportation segment was identified; for this segment, the main characteristics of the traffic flow were established. To estimate these characteristics up to 2025, the available methods for projecting changes in their values were analyzed. Based on the method of extrapolation, this analysis of intensity change identified three scenarios for the development of the transportation system. It was established that the most favorable method of controlling the traffic flow at the entrance to the city is long-term control of the traffic system. Based on the investigations of foreign scientists and a mathematical analysis of the changes in intensity on the main routes of the given road, the authors put forward, for the first time, a method for graphically choosing the required control plan. The effectiveness of the proposed organization scheme for the transportation system was evaluated in the Transyt-14 program, with analysis of the changes in the main characteristics of the traffic flow.
Proteomic analysis of mare follicular fluid during late follicle development
2011-01-01
Background Follicular fluid accumulates in the antrum of the follicle from the early stages of follicle development. Studies of its components may contribute to a better understanding of the mechanisms underlying follicular development and oocyte quality. With this objective, we performed a proteomic analysis of mare follicular fluid. First, we hypothesized that proteins in follicular fluid may differ from those in serum, and may also change during follicle development. Second, we used four different immunodepletion approaches and one enrichment method in order to overcome the masking effect of high-abundance proteins present in the follicular fluid and to identify those present in lower abundance. Finally, we compared our results with previous studies performed in mono-ovulant (human) and poly-ovulant (porcine and canine) species in an attempt to identify common and/or species-specific proteins. Methods Follicular fluid samples were collected from ovaries at three different stages of follicle development (early dominant, late dominant and preovulatory). Blood samples were also collected at each time. The proteomic analysis was carried out on crude, depleted and enriched follicular fluid by 2D-PAGE, 1D-PAGE and mass spectrometry. Results A total of 459 protein spots were visualized by 2D-PAGE of crude mare follicular fluid, with no difference among the three physiological stages. Thirty proteins were differentially expressed between serum and follicular fluid. The enrichment method was found to be the most powerful for detection and identification of low-abundance proteins from follicular fluid: we were able to identify 18 proteins in the crude follicular fluid, and as many as 113 in the enriched follicular fluid. Inhibins and a few other proteins involved in reproduction could only be identified after enrichment of the follicular fluid, demonstrating the power of the method used. The comparison of proteins found in mare follicular fluid with proteins previously identified in human, porcine and canine follicular fluids led to the identification of 12 common proteins and of several species-specific proteins. Conclusions This study provides the first description of the mare follicular fluid proteome during the late follicle development stages. We identified several proteins from crude, depleted and enriched follicular fluid. Our results demonstrate that the enrichment method, combined with 2D-PAGE and mass spectrometry, can be successfully used to visualize and further identify low-abundance proteins in follicular fluid. PMID:21923925
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. Well-known methods of spectral and trend analysis are used for multidimensional data arrays. Interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a prevailing (dominant) eigenvalue. The relevant properties of the time series autocorrelation matrix are identified.
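A minimal sketch of checking for a prevailing eigenvalue in the autocorrelation matrix of a lag-embedded series follows; the series, embedding depth, and trend strength are illustrative assumptions.

```python
# Dominant-eigenvalue check on the correlation matrix of lagged copies of a
# trend-plus-noise series.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(500)
x = 0.02 * t + rng.normal(size=500)          # trend plus noise
lags = 20
X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
R = np.corrcoef(X, rowvar=False)             # lags x lags correlation matrix
eig = np.linalg.eigvalsh(R)[::-1]            # descending eigenvalues
print(f"largest eigenvalue {eig[0]:.1f} vs next {eig[1]:.2f}")
```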
ERIC Educational Resources Information Center
Barton, Erin E.; Pustejovsky, James E.; Maggin, Daniel M.; Reichow, Brian
2017-01-01
The adoption of methods and strategies validated through rigorous, experimentally oriented research is a core professional value of special education. We conducted a systematic review and meta-analysis examining the experimental literature on Technology-Aided Instruction and Intervention (TAII) using research identified as part of the National…
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with LSA implemented using the moving-averaging method, the novel WMD-based LSA better identified weak geochemical anomalies associated with mineralization in the covered area.
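A toy sketch of wavelet-based multiscale decomposition of a 2D grid with PyWavelets follows; the grid, the inserted "anomaly", and the wavelet choice are all illustrative, not the study's Fe2O3 data.

```python
# Multiscale decomposition of a synthetic geochemical grid: the level-3
# approximation plays the role of the large-scale background field.
import numpy as np
import pywt

rng = np.random.default_rng(8)
grid = rng.normal(size=(128, 128))
grid[60:68, 60:68] += 4.0                    # a local anomaly
coeffs = pywt.wavedec2(grid, "db2", level=3)
approx = coeffs[0]                           # large-scale background field
detail_h, detail_v, detail_d = coeffs[-1]    # finest-scale details
print("level-3 approximation shape:", approx.shape)
```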
Shape classification of wear particles by image boundary analysis using machine learning algorithms
NASA Astrophysics Data System (ADS)
Yuan, Wei; Chin, K. S.; Hua, Meng; Dong, Guangneng; Wang, Chunhui
2016-05-01
The shape features of wear particles generated from a wear track usually contain plenty of information about the wear state of a machine's operating condition. Techniques to quickly identify the types of wear particles, respond to the machine operation, and prolong the machine's life appear to be lacking and are yet to be established. To bridge rapid off-line feature recognition with on-line wear mode identification, this paper presents a new radial concave deviation (RCD) method that mainly involves the use of the particle boundary signal to analyze wear particle features. Signal output from the RCDs subsequently facilitates the determination of several other feature parameters, typically relevant to the shape and size of the wear particle. Debris feature and type are identified through the use of various classification methods, such as linear discriminant analysis, quadratic discriminant analysis, the naïve Bayesian method, and the classification and regression tree method (CART). The average errors of training and testing via ten-fold cross-validation suggest that CART is a highly suitable approach for classifying and analyzing particle features. Furthermore, the results of the wear debris analysis enable the maintenance team to diagnose faults appropriately.
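A very loose sketch of a radial boundary signal (distance from centroid versus angle) for a synthetic lobed outline follows; the paper's radial concave deviation builds a more careful measure on this kind of signal, and all values here are invented.

```python
# Radial boundary signal of a synthetic particle outline and a crude
# concavity proxy derived from it.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 1.0 + 0.3 * np.cos(3 * theta)            # synthetic particle outline
x, y = r * np.cos(theta), r * np.sin(theta)
cx, cy = x.mean(), y.mean()                  # centroid of boundary points
radial = np.hypot(x - cx, y - cy)            # boundary signal
deviation = radial.max() - radial            # deviation below max radius
print(f"mean radial deviation: {deviation.mean():.3f}")
```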
Analysis of flavor compounds by GC/MS after liquid-liquid extraction from fruit juices
NASA Astrophysics Data System (ADS)
Tuşa, F. D.; Moldovan, Z.; Schmutzer, G.; Magdaş, D. A.; Dehelean, A.; Vlassa, M.
2012-02-01
In this work we describe a rapid method for analysing the volatile profiles of several commercial fruit juices using a GC/MS instrument after liquid-liquid extraction. Volatile flavor compounds were identified based on the mass spectra obtained in EI mode. The method allows the analysis of a wide range of flavor compounds (esters, aldehydes, alcohols, terpenoids), and the procedure is rapid, simple and inexpensive. Moreover, by means of the volatile compounds it may be possible to distinguish juices of organic and conventional production, and those with added flavorings. More than 20 compounds were identified and quantified by relative chromatogram area based on the largest ion in the mass spectrum.
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
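The per-parameter sweep described above can be illustrated with a stand-in risk model; the sketch below scans each uncertain parameter over an assumed credible range, holds the others at nominal values, and reports where risk peaks. The risk function, ranges, and nominal values are all invented, not the APA model.

```python
# Parameter vulnerability sweep over a placeholder risk model.
import numpy as np

def risk(p_fail, exposure):                  # illustrative risk model
    return 1 - (1 - p_fail) ** exposure

ranges = {"p_fail": np.linspace(1e-4, 1e-1, 100),
          "exposure": np.linspace(1, 50, 100)}
nominal = {"p_fail": 1e-3, "exposure": 10}
for name, grid in ranges.items():
    vals = [risk(**{**nominal, name: v}) for v in grid]
    i = int(np.argmax(vals))
    print(f"{name}: max risk {vals[i]:.3f} at value {grid[i]:.4f}")
```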
Symbiotic Fungus of Marine Sponge Axinella sp. Producing Antibacterial Agent
NASA Astrophysics Data System (ADS)
Trianto, A.; Widyaningsih, S.; Radjasa, OK; Pribadi, R.
2017-02-01
The emergence of multidrug-resistant (MDR) pathogenic bacteria has made treatment of the diseases they cause ineffective. Therefore, the discovery of new drugs with novel modes of action is essential for curing diseases caused by MDR pathogens. Marine fungi are a prolific source of bioactive compounds that has not been well explored. This study aimed to obtain a marine sponge-associated fungus producing anti-MDR antibacterial substances. The sponge was collected from Riung waters, NTT, Indonesia. The fungus was isolated by the affixed method, followed by purification using the streak method. The overlay and disk diffusion agar methods were applied to test the bioactivity of the isolate and the extract, respectively. Molecular analysis was employed for identification of the isolate, while the sponge was identified based on morphological and spicular analysis. The overlay test showed that isolate KN15-3 was active against MDR Staphylococcus aureus and Escherichia coli. The extract of cultured KN15-3 also inhibited S. aureus and E. coli, with inhibition zones of 2.95 mm and 4.13 mm, respectively. Based on the molecular analysis, the fungus was identified as Aspergillus sydowii, and the sponge as Axinella sp.
Correlative and multivariate analysis of increased radon concentration in underground laboratory.
Maletić, Dimitrije M; Udovičić, Vladimir I; Banjanac, Radomir M; Joković, Dejan R; Dragić, Aleksandar L; Veselinović, Nikola B; Filipović, Jelena
2014-11-01
The results of an analysis, using correlative and multivariate methods as developed for data analysis in high-energy physics and implemented in the Toolkit for Multivariate Analysis software package, of the relations between variations in increased radon concentration and climate variables in a shallow underground laboratory are presented. Multivariate regression analysis identified a number of multivariate methods that give a good evaluation of increased radon concentrations based on climate variables. Use of these multivariate regression methods will enable investigation of the relation of specific climate variables to increased radon concentrations, with the regression analysis yielding a 'mapped' underlying functional behaviour of radon concentration over a wide spectrum of climate variables. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Analysis and methods of improvement of safety at high-speed rural intersections.
DOT National Transportation Integrated Search
2012-04-01
Since 2006, INDOT has been preparing an annual five-percent report that identifies intersections and segments on Indiana state roads that require attention due to the excessive number and severity of crashes. Many of the identified intersections ...
NASA Astrophysics Data System (ADS)
O'Shea, Bethany; Jankowski, Jerzy
2006-12-01
The major ion composition of Great Artesian Basin groundwater in the lower Namoi River valley is relatively homogeneous. Traditional graphical techniques have been combined with multivariate statistical methods to determine whether subtle differences in the chemical composition of these waters can be delineated. Hierarchical cluster analysis and principal components analysis were successful in delineating minor variations within the groundwaters of the study area that were not visually identified by the graphical techniques applied. Hydrochemical interpretation allowed geochemical processes to be identified in each statistically defined water type and illustrated how these groundwaters differ from one another. Three main geochemical processes were identified in the groundwaters: ion exchange, precipitation, and mixing between waters from different sources. Both statistical methods delineated an anomalous sample suspected of being influenced by magmatic CO2 input. The use of statistical methods to complement traditional graphical techniques for waters appearing homogeneous is emphasized for all investigations of this type.
PSEA: Kinase-specific prediction and analysis of human phosphorylation substrates
NASA Astrophysics Data System (ADS)
Suo, Sheng-Bao; Qiu, Jian-Ding; Shi, Shao-Ping; Chen, Xiang; Liang, Ru-Ping
2014-03-01
Protein phosphorylation catalysed by kinases plays crucial regulatory roles in intracellular signal transduction. With the increasing number of kinase-specific phosphorylation sites and disease-related phosphorylation substrates that have been identified, we are motivated to explore the regulatory relationship between protein kinases and disease-related phosphorylation substrates. In this work, we analysed the kinase characteristics of all disease-related phosphorylation substrates using our Phosphorylation Set Enrichment Analysis (PSEA) method. We evaluated the efficiency of our method with an independent test and conclude that our approach is reliable for identifying kinases responsible for phosphorylated substrates. In addition, we found that the Mitogen-activated protein kinase (MAPK) and Glycogen synthase kinase (GSK) families are more associated with abnormal phosphorylation. It can be anticipated that our method may help to identify the mechanisms of phosphorylation and the relationship between kinases and phosphorylation-related diseases. A user-friendly web interface is freely available at http://bioinfo.ncu.edu.cn/PKPred_Home.aspx.
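A toy enrichment flavor of the idea above follows: does a kinase's substrate set overlap a disease-related site list more than chance? A hypergeometric tail probability is used here; PSEA's actual statistic may differ, and all counts are invented.

```python
# Hypergeometric overlap test between a kinase substrate set and a
# disease-related phosphosite list.
from scipy.stats import hypergeom

N = 5000   # all phosphorylation sites considered
K = 200    # sites attributed to the kinase
n = 300    # disease-related sites
k = 30     # overlap between the two lists
p = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k)
print(f"enrichment p-value: {p:.2e}")
```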
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier integrating the chemical and phenotypic spaces is built and used during the process to assess images initially classified as "fuzzy", providing automated iterative feedback tuning. Simultaneously, all of this information is annotated directly in a relational database containing the chemical data. This fully automated method was validated by re-analysis of results from a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of the confirmed hits identified by the conventional multistep analysis method were also identified by this integrated one-step system, along with 40 new hits (14.9% of the total) that were originally false negatives; 96% of true negatives were properly recognized as well. Web-based access to the database, with customizable data retrieval and visualization tools, facilitates posterior analysis of the annotated cytological features and allows identification of additional phenotypic profiles, so further analysis of the original crude images is not required.
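A toy naive Bayes classifier over concatenated chemical and phenotypic descriptors follows, in the spirit of the integrated classifier described above; all features and labels are synthetic.

```python
# Gaussian naive Bayes on combined chemical + phenotypic feature vectors.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(9)
chem = rng.normal(size=(500, 8))     # chemical descriptors
pheno = rng.normal(size=(500, 5))    # cytological features
X = np.hstack([chem, pheno])
y = (X[:, 0] + X[:, 9] + rng.normal(scale=0.5, size=500) > 0).astype(int)
clf = GaussianNB().fit(X[:400], y[:400])
print(f"held-out accuracy: {clf.score(X[400:], y[400:]):.2%}")
```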
Richardson, Rodney T; Lin, Chia-Hua; Sponsler, Douglas B; Quijia, Juan O; Goodell, Karen; Johnson, Reed M
2015-01-01
Melissopalynology, the identification of bee-collected pollen, provides insight into the flowers exploited by foraging bees. Information provided by melissopalynology could guide floral enrichment efforts aimed at supporting pollinators, but it has rarely been used because traditional methods of pollen identification are laborious and require expert knowledge. We approach melissopalynology in a novel way, employing a molecular method to study the pollen foraging of honey bees (Apis mellifera) in a landscape dominated by field crops, and compare these results to those obtained by microscopic melissopalynology. • Pollen was collected from honey bee colonies in Madison County, Ohio, USA, during a two-week period in midspring and identified using microscopic methods and ITS2 metabarcoding. • Metabarcoding identified 19 plant families and exhibited sensitivity for identifying the taxa present in large and diverse pollen samples relative to microscopy, which identified eight families. The bulk of pollen collected by honey bees was from trees (Sapindaceae, Oleaceae, and Rosaceae), although dandelion (Taraxacum officinale) and mustard (Brassicaceae) pollen were also abundant. • For quantitative analysis of pollen, using both metabarcoding and microscopic identification is superior to either individual method. For qualitative analysis, ITS2 metabarcoding is superior, providing heightened sensitivity and genus-level resolution.
An autocorrelation method to detect low frequency earthquakes within tremor
Brown, J.R.; Beroza, G.C.; Shelly, D.R.
2008-01-01
Recent studies have shown that deep tremor in the Nankai Trough under western Shikoku consists of a swarm of low frequency earthquakes (LFEs) that occur as slow shear slip on the down-dip extension of the primary seismogenic zone of the plate interface. The similarity of tremor in other locations suggests a similar mechanism, but the absence of cataloged low frequency earthquakes prevents a similar analysis. In this study, we develop a method for identifying LFEs within tremor. The method employs a matched-filter algorithm, similar to the technique used to infer that tremor in parts of Shikoku is comprised of LFEs; however, in this case we do not assume the origin times or locations of any LFEs a priori. We search for LFEs using the running autocorrelation of tremor waveforms for 6 Hi-Net stations in the vicinity of the tremor source. Time lags showing strong similarity in the autocorrelation represent either repeats, or near repeats, of LFEs within the tremor. We test the method on an hour of Hi-Net recordings of tremor and demonstrate that it extracts both known and previously unidentified LFEs. Once identified, we cross correlate waveforms to measure relative arrival times and locate the LFEs. The results are able to explain most of the tremor as a swarm of LFEs, and the locations of newly identified events appear to fill a gap in the spatial distribution of known LFEs. This method should allow us to extend the analysis of Shelly et al. (2007a) to parts of the Nankai Trough in Shikoku that have sparse LFE coverage, and may also allow us to extend our analysis to other regions that experience deep tremor but where LFEs have not yet been identified. Copyright 2008 by the American Geophysical Union.
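A toy sketch of the core operation follows: sliding (unnormalized) correlation of a trace against one of its own segments to find repeating embedded events. The paper's running autocorrelation scans all lags without choosing a template a priori; the data here are synthetic.

```python
# Detect repeats of an embedded wavelet by correlating a trace with one of
# its own segments.
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(10)
event = np.sin(np.linspace(0, 20, 200)) * np.hanning(200)   # "LFE" wavelet
trace = rng.normal(scale=0.3, size=5000)
for t0 in (500, 2100, 3700):                 # repeated occurrences
    trace[t0:t0 + 200] += event
cc = correlate(trace, trace[500:700], mode="valid")
cc /= np.max(np.abs(cc))
print("candidate repeats at samples:", np.where(cc > 0.8)[0])
```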
Hameed, Ahmed S; Modre-Osprian, Robert; Schreier, Günter
2017-01-01
The increasing cost of treating heart failure (HF) patients affects the initiation of appropriate treatment. Divergent approaches to measuring treatment costs and the lack of common cost indicators impede comparison of therapy settings. In the present meta-analysis, key cost indicators from the perspective of healthcare providers are identified, described, analyzed and quantified. This review helps narrow down the cost indicators that have the most significant economic impact on the total treatment costs of HF patients, and telemedical services are compared to standard therapy methods. The identification process was based on several steps: for the quantitative synthesis we used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, and an additional set of criteria was defined for the subsequent qualitative analysis. Five key cost indicators with a significant economic impact on the treatment costs of HF patients were identified, and 95% of the reported treatment costs could be captured based on these indicators.
Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A
2015-11-01
Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality are limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogeneous FFPE tissue to minimize pre-analytical variation and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and between methods, despite the minimized pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Zunker, Norma D.; Pearce, Daniel L.
2012-01-01
The first part of this study explored the significant works pertaining to the understanding of reading comprehension using a Modified Delphi Method. A panel of reading comprehension experts identified 19 works they considered to be significant to the understanding of reading comprehension. The panel of experts identified the reasons they…
A practical examination of RNA isolation methods for European pear (Pyrus communis)
USDA-ARS?s Scientific Manuscript database
With the goal of identifying fast, reliable and broadly applicable RNA isolation methods in European pear fruit for downstream transcriptome analysis, we evaluated several commercially available kit-based RNA isolations methods, plus our modified version of a published cetyl trimethyl ammonium bromi...
Physical-chemical property based sequence motifs and methods regarding same
Braun, Werner [Friendswood, TX; Mathura, Venkatarajan S [Sarasota, FL; Schein, Catherine H [Friendswood, TX
2008-09-09
A data analysis system, program, and/or method, e.g., a data mining/data exploration method, using physical-chemical property motifs. For example, a sequence database may be searched for identifying segments thereof having physical-chemical properties similar to the physical-chemical property motifs.
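A toy illustration of the idea of searching sequences by physical-chemical property profiles rather than exact residues follows; the property scale, motif vector, threshold, and sequence are all invented for the example.

```python
# Sliding-window search for segments whose property profile lies close to a
# target motif vector; the linear "property scale" is a placeholder, not a
# published hydrophobicity scale.
import numpy as np

prop = dict(zip("ACDEFGHIKLMNPQRSTVWY", np.linspace(-1, 1, 20)))
motif = np.array([0.8, -0.5, 0.1, 0.7])      # target property profile
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
vals = np.array([prop[a] for a in seq])
for i in range(len(vals) - len(motif) + 1):
    d = np.linalg.norm(vals[i:i + len(motif)] - motif)
    if d < 0.8:                              # property-similarity hit
        print(i, seq[i:i + len(motif)], round(d, 2))
```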
Moberg, Andreas; Hansson, Eva; Boyd, Helen
2014-01-01
Abstract With the public availability of biochemical assays and screening data constantly increasing, new applications for data mining and method analysis are evolving in parallel. One example is BioAssay Ontology (BAO) for systematic classification of assays based on screening setup and metadata annotations. In this article we report a high-throughput screening (HTS) against phospho-N-acetylmuramoyl-pentapeptide translocase (MraY), an attractive antibacterial drug target involved in peptidoglycan synthesis. The screen resulted in novel chemistry identification using a fluorescence resonance energy transfer assay. To address a subset of the false positive hits, a frequent hitter analysis was performed using an approach in which MraY hits were compared with hits from similar assays, previously used for HTS. The MraY assay was annotated according to BAO and three internal reference assays, using a similar assay design and detection technology, were identified. Analyzing the assays retrospectively, it was clear that both MraY and the three reference assays all showed a high false positive rate in the primary HTS assays. In the case of MraY, false positives were efficiently identified by applying a method to correct for compound interference at the hit-confirmation stage. Frequent hitter analysis based on the three reference assays with similar assay method identified additional false actives in the primary MraY assay as frequent hitters. This article demonstrates how assays annotated using BAO terms can be used to identify closely related reference assays, and that analysis based on these assays clearly can provide useful data to influence assay design, technology, and screening strategy. PMID:25415593
Evaluation of Parallel Analysis Methods for Determining the Number of Factors
ERIC Educational Resources Information Center
Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.
2010-01-01
Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
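The retention criteria named above can be illustrated directly; the sketch below implements the 95th-percentile variant of parallel analysis for PCA on simulated data with one common factor.

```python
# Parallel analysis: compare observed eigenvalues with the 95th percentile
# of eigenvalues from random data of the same size; data are simulated.
import numpy as np

rng = np.random.default_rng(11)
n, p, reps = 300, 12, 200
X = rng.normal(size=(n, p))
X[:, :4] += rng.normal(size=(n, 1))          # one common factor
obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
rand = np.array([np.linalg.eigvalsh(
    np.corrcoef(rng.normal(size=(n, p)), rowvar=False))[::-1]
    for _ in range(reps)])
crit95 = np.percentile(rand, 95, axis=0)
# Crude count of components whose eigenvalue exceeds the random criterion:
print("components retained (95th-percentile rule):",
      int(np.sum(obs > crit95)))
```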
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
Provide a response to MOWG action item 1410-01: analyze close approaches that have required mission team action on short notice, and determine why these approaches were identified later in the process than most other events. Method: an analysis was performed to determine whether there is any correlation between late-notice event identification and space weather, sparse tracking, or high-drag objects, which would allow preventive action to be taken; specific late-notice events identified by missions as problematic were also examined to find root causes and relate them to the correlation analysis.
Clinical and Epidemiological Characteristics of Suicides Committed in Medellin, Colombia.
Ortega, Paula Andrea; Manrique, Ruben Darío; Tovilla Zarate, Carlos Alfonso; López Jaramillo, Carlos; Cuartas, Jorge Mauricio
2014-01-01
The purpose of this study was to identify the characteristics of individuals who committed suicide in Medellín between 2008 and 2010, and to identify variables related to the type of event. A retrospective and descriptive analysis was conducted on data provided by the National Institute of Legal Medicine and Forensic Sciences. In addition, univariate and bivariate analyses were used to identify the sociodemographic and medical-legal characteristics of the deceased, and multiple correspondence analysis was used to establish typologies. The information was analyzed using STATA 11.0. Of the 389 cases occurring between 2008 and 2010, 84.6% (n=329) were men; the male to female ratio was 5:1. Sixty-four percent of the cases occurred in people aged 18-45 years and 6.7% in children under 18, with hanging being the method most frequently chosen by the victims (48.3%). Exploratory analysis identified a possible association between the use of violent methods and events occurring in the home and in socioeconomic strata 1, 2 and 3. Some factors could be associated with suicide, providing data that could consolidate health intervention strategies in our population. Copyright © 2013 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Harst, Lorenz; Timpel, Patrick; Otto, Lena; Wollschlaeger, Bastian; Richter, Peggy; Schlieter, Hannes
2018-01-01
This paper presents an approach to evaluating completed telemedicine projects using qualitative methods. Telemedicine applications are said to improve the performance of health care systems, yet while there are countless telemedicine projects, the vast majority never cross the threshold from testing to implementation and diffusion. Projects were collected from German project databases in the area of telemedicine following systematically developed criteria. In a testing phase, ten projects were subjected to a qualitative content analysis to identify limitations, needs for further research, and lessons learned. Using Mayring's method of inductive category development, six categories of possible future research were derived. The proposed method is thus an important contribution to diffusion and translation research in telemedicine, as it is applicable to systematic database research.
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber during production were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing parameters (load, hydraulic pressure, air pressure, and fixture height), washing parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting parameters (flowability, coating thickness, pointage, and temperature). In this paper, the washing and painting process parameters are optimized by the Taguchi method. Although defects are substantially reduced by the Taguchi method, a genetic algorithm technique is applied to the Taguchi-optimized parameters in order to achieve zero defects during these processes.
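A toy sketch of a Taguchi-style signal-to-noise ratio follows, for a smaller-the-better characteristic (e.g., defect counts) across orthogonal-array runs; the replicate values are invented.

```python
# Smaller-the-better S/N ratio per experimental run; the run with the
# highest S/N is preferred in Taguchi analysis.
import numpy as np

runs = {"L1": [3, 4, 2], "L2": [1, 1, 2], "L3": [5, 6, 4]}
for name, y in runs.items():
    y = np.asarray(y, dtype=float)
    sn = -10 * np.log10(np.mean(y ** 2))     # smaller-the-better S/N (dB)
    print(f"{name}: S/N = {sn:.2f} dB")
```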
NASA Astrophysics Data System (ADS)
Dimova, Dilyana; Bajorath, Jürgen
2017-07-01
Computational scaffold hopping aims to identify core structure replacements in active compounds. To evaluate scaffold hopping potential from a principled point of view, regardless of the computational methods applied, a global analysis of conventional scaffolds in analog series from compound activity classes was carried out. The majority of analog series were found to contain multiple scaffolds, thus enabling the detection of intra-series scaffold hops among closely related compounds. More than 1000 activity classes were found to contain increasing proportions of multi-scaffold analog series. Thus, using such activity classes for scaffold hopping analysis is likely to overestimate the scaffold hopping (core structure replacement) potential of computational methods, owing to an abundance of artificial scaffold hops that are possible within analog series.
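A toy sketch of extracting Bemis-Murcko scaffolds with RDKit follows, the usual first step when counting scaffolds within analog series; the SMILES strings are illustrative.

```python
# Murcko scaffold extraction: two of the three analogs below share a benzene
# core, while the pyridine analog would count as an intra-series hop.
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

for smi in ["c1ccccc1CCN", "c1ccncc1CCN", "c1ccccc1CCO"]:
    mol = Chem.MolFromSmiles(smi)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    print(smi, "->", Chem.MolToSmiles(core))
```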
Linguistic methodology for the analysis of aviation accidents
NASA Technical Reports Server (NTRS)
Goguen, J. A.; Linde, C.
1983-01-01
A linguistic method for the analysis of small-group discourse was developed, and the use of this method on transcripts of commercial air transport accidents is demonstrated. The method identifies the discourse types that occur and determines their linguistic structure; it identifies significant linguistic variables based upon these structures or other linguistic concepts such as speech act and topic; it tests hypotheses concerning the significance and reliability of these variables; and it indicates the implications of the validated hypotheses. These implications fall into three categories: (1) to train crews to use more nearly optimal communication patterns; (2) to use linguistic variables as indices for aspects of crew performance such as attention; and (3) to provide guidelines for the design of aviation procedures and equipment, especially those that involve speech.
Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis
Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B
2011-01-01
Objective The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method Eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. Lag-sequential analysis included both event-based lag and time-based lag. Results Results from event-based lag analysis showed that the patients’ gaze followed that of clinicians, while clinicians did not follow patients. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds after the initial behavior of the clinician. Conclusion Our data suggest that the clinician’s gaze significantly affects the medical encounter but not the converse. Application Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
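A toy event-based lag-1 sequential analysis follows: count transitions between clinician gaze (C) and patient gaze (P) events and z-score them against independence. The event stream is invented, and the z-score is a crude simplification of the usual lag-sequential statistics.

```python
# Lag-1 transition counts and rough z-scores against an independence model.
import numpy as np

events = list("CPCPCCPPCPCPCCPCPPCP")
states = sorted(set(events))
idx = {s: i for i, s in enumerate(states)}
T = np.zeros((len(states), len(states)))
for a, b in zip(events, events[1:]):
    T[idx[a], idx[b]] += 1
expected = np.outer(T.sum(axis=1), T.sum(axis=0)) / T.sum()
z = (T - expected) / np.sqrt(expected)
print("transitions:\n", T, "\nz-scores:\n", z.round(2))
```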
Identification Method of Mud Shale Fractures Base on Wavelet Transform
NASA Astrophysics Data System (ADS)
Xia, Weixu; Lai, Fuqiang; Luo, Han
2018-01-01
In recent years, inspired by seismic analysis technology, a new method for analysing fractures in mud shale oil and gas reservoirs from logging attributes has emerged. By extracting the high-frequency component of the wavelet transform of the logging signal, formation information hidden in the signal is recovered and fractures not recognized by conventional logging are identified; within the identified fracture segments, responses such as "cycle jump", "high value" and "spike" become more evident. Finally, a complete wavelet denoising method and a wavelet high-frequency fracture identification method were established.
Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm
NASA Astrophysics Data System (ADS)
Karaca, Yeliz; Cattani, Carlo
Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian-motion Hölder regularity functions (polynomial, periodic (sine), and exponential) for 2D images were applied, as multifractal methods, to MR brain images, aiming to easily identify distressed regions in MS patients. Based on these regions, we propose an MS classification built on the multifractal method using the Self-Organizing Map (SOM) algorithm. We thereby obtain a cluster analysis, identifying pixels of distressed regions in MR images through multifractal methods and diagnosing subgroups of MS patients through artificial neural networks.
Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er
2012-04-01
In this paper, a new method was developed to differentiate spill oil samples. Synchronous fluorescence spectra in the lower nonlinear concentration range of 10⁻²-10⁻¹ g·L⁻¹ were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to classify the sample sets, with principal component analysis (PCA) as the feature extraction method. The recognition rate for closely related oil source samples was 92%. The results demonstrate that the proposed method can identify crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method appears well suited to real-time spill oil identification and can also be readily applied to oil logging and to the analysis of other multi-PAH or multi-fluorescent mixtures.
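The following sketch pairs PCA feature extraction with a small radial-basis-function network built from scratch, in the spirit of the PCA + RBF-ANN pipeline described above; the spectra, labels, number of centers, and kernel width are invented placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def fit_rbf(X, y_onehot, n_centers=10, gamma=1.0):
    # Hidden layer: Gaussian units centered on k-means centroids of the features
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    W, *_ = np.linalg.lstsq(Phi, y_onehot, rcond=None)   # linear output weights
    return centers, W

def predict_rbf(X, centers, W, gamma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                 # 60 synthetic spectra, 200 wavelengths
y = rng.integers(0, 3, size=60)                # 3 hypothetical oil sources
scores = PCA(n_components=5).fit_transform(X)  # feature extraction step
centers, W = fit_rbf(scores, np.eye(3)[y])
pred = predict_rbf(scores, centers, W).argmax(axis=1)
print("training recognition rate:", (pred == y).mean())
```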
Apel, William A.; Thompson, Vicki S; Lacey, Jeffrey A.; Gentillon, Cynthia A.
2016-08-09
A method for determining a plurality of proteins for discriminating and positively identifying an individual from a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.
Thompson, Vicki S; Lacey, Jeffrey A; Gentillon, Cynthia A; Apel, William A
2015-03-03
A method for determining a plurality of proteins for discriminating and positively identifying an individual from a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.
Modeling and observations of an elevated, moving infrasonic source: Eigenray methods.
Blom, Philip; Waxler, Roger
2017-04-01
The acoustic ray tracing relations are extended by the inclusion of auxiliary parameters describing variations in the spatial ray coordinates and eikonal vector due to changes in the initial conditions. Computation of these parameters allows one to define the geometric spreading factor along individual ray paths and assists in the identification of caustic surfaces so that the associated phase shifts can be easily accounted for. A method is developed leveraging the auxiliary parameters to identify propagation paths connecting specific source-receiver geometries, termed eigenrays. The newly introduced method is found to be highly efficient in cases where propagation is non-planar due to horizontal variations in the propagation medium or the presence of cross winds. The eigenray method is utilized in the analysis of infrasonic signals produced by a multi-stage sounding rocket launch, with promising results for applications of tracking aeroacoustic sources in the atmosphere and specifically for the analysis of motor performance during dynamic tests.
OKINO, Cintia Hiromi; MONTASSIER, Maria de Fátima Silva; de OLIVEIRA, Andressa Peres; MONTASSIER, Helio José
2018-01-01
A method based on Melting Temperature analysis of the Hypervariable regions (HVR) of the S1 gene within an RT-qPCR was developed to detect different genotypes of avian infectious bronchitis virus (IBV) and identify the Mass genotype. The method rapidly identified the Mass genotype among IBV field isolates, attenuated vaccine strains, and the reference M41 strain in allantoic fluid and also directly in tissues. The developed RT-qPCR detected the virus in both tracheal and pulmonary samples from M41-infected or H120-infected birds over a longer post-infection period than the standard virus isolation method. The RT-qPCR method thus provides a sensitive and rapid approach for screening IBV detection and Mass genotyping of IBV isolates. PMID:29491226
Recent Applications of Higher-Order Spectral Analysis to Nonlinear Aeroelastic Phenomena
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hajj, Muhammad R.; Dunn, Shane; Strganac, Thomas W.; Powers, Edward J.; Stearman, Ronald
2005-01-01
Recent applications of higher-order spectral (HOS) methods to nonlinear aeroelastic phenomena are presented. Applications include the analysis of data from a simulated nonlinear pitch and plunge apparatus and from F-18 flight flutter tests. A MATLAB model of Texas A&M University's Nonlinear Aeroelastic Testbed Apparatus (NATA) is used to generate aeroelastic transients at various conditions including limit cycle oscillations (LCO). The Gaussian or non-Gaussian nature of the transients is investigated, related to HOS methods, and used to identify levels of increasing nonlinear aeroelastic response. Royal Australian Air Force (RAAF) F/A-18 flight flutter test data are presented and analyzed. The data include high-quality measurements of forced responses and LCO phenomena. Standard power spectral density (PSD) techniques and HOS methods are applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.
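For readers unfamiliar with HOS quantities, here is a compact, hedged sketch of a segment-averaged bispectrum estimate, the kind of statistic used to detect quadratic phase coupling in nonlinear response data; the segment length, window, and synthetic coupled-tone signal are illustrative.

```python
import numpy as np

def bispectrum(x, nseg=64):
    f1, f2 = np.meshgrid(np.arange(nseg), np.arange(nseg))
    win = np.hanning(nseg)
    starts = range(0, len(x) - nseg + 1, nseg)
    B = np.zeros((nseg, nseg), dtype=complex)
    for i in starts:
        X = np.fft.fft(x[i:i + nseg] * win)
        B += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nseg])   # triple product E[X(f1)X(f2)X*(f1+f2)]
    return np.abs(B) / len(starts)

rng = np.random.default_rng(0)
t = np.arange(4096) / 512.0
# Quadratically coupled tones at f1, f2, and f1 + f2 mimic a nonlinear signature
x = (np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 60 * t)
     + 0.5 * np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(t.size))
print("peak bispectral magnitude:", bispectrum(x).max().round(1))
```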
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed-form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters is then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, and fabrication and assembly processes. The influence of structural geometry and failure mode is also considered in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
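As a stand-in for the numerical integration step, the sketch below estimates reliability by Monte Carlo sampling of fitted strength and load distributions; the Weibull and normal parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
# Material strength scatter (Weibull) and applied load scatter (normal), MPa;
# parameters are hypothetical, not from the report
strength = 480.0 * rng.weibull(a=20.0, size=n)
load_stress = rng.normal(loc=300.0, scale=40.0, size=n)
# Reliability = P(strength exceeds the stress induced by the applied load)
reliability = np.mean(strength > load_stress)
print(f"estimated reliability: {reliability:.5f}")
```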
Zehner, R; Zimmermann, S; Mebs, D
1998-01-01
To identify common animal species by analysis of the cytochrome b gene, a method has been developed to obtain PCR products of a large domain of the cytochrome b gene (981 bp out of 1140 bp) in humans, selected mammals and birds using the same specifically designed primers. Species-specific RFLP patterns are generated by co-restriction with the restriction endonucleases AluI and NcoI. The RFLP patterns obtained are conclusive even in mixtures of two or more species. The results were confirmed by sequence analysis, which in addition explained intraspecies variations in the RFLP patterns. The method has been applied to forensic casework studies in which the origin of roasted meat, stomach contents and a bone sample was successfully identified.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.
Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit
2007-10-11
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.
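A hedged sketch of a three-method cluster ensemble consolidated through a co-association matrix, mirroring the evaluation-and-integration step described above; the random data, the choice of the three algorithms, and k = 4 are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(358, 12))            # 358 subjects x 12 item scores (synthetic)

labelings = [
    KMeans(n_clusters=4, n_init=10).fit_predict(X),
    AgglomerativeClustering(n_clusters=4).fit_predict(X),
    GaussianMixture(n_components=4).fit(X).predict(X),
]
# Co-association: fraction of methods placing each pair in the same cluster
co = np.mean([(l[:, None] == l[None, :]).astype(float) for l in labelings], axis=0)
# Consensus clustering on the co-association "distance" (1 - co)
consensus = fcluster(linkage(squareform(1.0 - co, checks=False), method="average"),
                     t=4, criterion="maxclust")
print(np.bincount(consensus)[1:])         # subjects per consensus cluster
```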
Using Cluster Ensemble and Validation to Identify Subtypes of Pervasive Developmental Disorders
Shen, Jess J.; Lee, Phil Hyoun; Holden, Jeanette J.A.; Shatkay, Hagit
2007-01-01
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV. PMID:18693920
The Use of Propensity Scores in Mediation Analysis
ERIC Educational Resources Information Center
Jo, Booil; Stuart, Elizabeth A.; MacKinnon, David P.; Vinokur, Amiram D.
2011-01-01
Mediation analysis uses measures of hypothesized mediating variables to test theory for how a treatment achieves effects on outcomes and to improve subsequent treatments by identifying the most efficient treatment components. Most current mediation analysis methods rely on untested distributional and functional form assumptions for valid…
Exploring patterns enriched in a dataset with contrastive principal component analysis.
Abid, Abubakar; Zhang, Martin J; Bagaria, Vivek K; Zou, James
2018-05-30
Visualization and exploration of high-dimensional data is a ubiquitous challenge across disciplines. Widely used techniques such as principal component analysis (PCA) aim to identify dominant trends in one dataset. However, in many settings we have datasets collected under different conditions, e.g., a treatment and a control experiment, and we are interested in visualizing and exploring patterns that are specific to one dataset. This paper proposes a method, contrastive principal component analysis (cPCA), which identifies low-dimensional structures that are enriched in a dataset relative to comparison data. In a wide variety of experiments, we demonstrate that cPCA with a background dataset enables us to visualize dataset-specific patterns missed by PCA and other standard methods. We further provide a geometric interpretation of cPCA and strong mathematical guarantees. An implementation of cPCA is publicly available, and can be used for exploratory data analysis in many applications where PCA is currently used.
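Since cPCA has a very compact linear-algebra core, here is a minimal sketch: directions are found by eigendecomposition of the target covariance minus alpha times the background covariance. The single alpha value and synthetic data are illustrative; the public implementation mentioned in the abstract explores a spectrum of alphas.

```python
import numpy as np

def cpca(target, background, alpha=1.0, n_components=2):
    Ct = np.cov(target, rowvar=False)
    Cb = np.cov(background, rowvar=False)
    evals, evecs = np.linalg.eigh(Ct - alpha * Cb)       # contrastive directions
    order = np.argsort(evals)[::-1][:n_components]
    return target @ evecs[:, order]                      # projected target data

rng = np.random.default_rng(0)
background = rng.normal(size=(500, 30))                  # e.g., control experiment
target = np.vstack([rng.normal(size=(250, 30)),          # treatment with a
                    rng.normal(size=(250, 30))           # target-specific shift
                    + np.r_[2.0, np.zeros(29)]])
Z = cpca(target, background, alpha=2.0)
print(Z.shape)                                           # (500, 2)
```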
Analysis of the sleep quality of elderly people using biomedical signals.
Moreno-Alsasua, L; Garcia-Zapirain, B; Mendez-Zorrilla, A
2015-01-01
This paper presents a technical solution that analyses sleep signals captured by biomedical sensors to find possible disorders during rest. Specifically, the method evaluates electrooculogram (EOG) signals, skin conductance (GSR), air flow (AS), and body temperature. Next, a quantitative sleep quality analysis determines significant changes in the biological signals, and any similarities between them in a given time period. Filtering techniques such as the Fourier transform method and IIR filters process the signal and identify significant variations. Once these changes have been identified, all significant data are compared and a quantitative and statistical analysis is carried out to determine the level of a person's rest. To evaluate the correlations and significant differences, a statistical analysis was performed, showing correlations between the EOG and AS signals (p=0.005), the EOG and GSR signals (p=0.037), and, finally, the EOG and body temperature (p=0.04). Doctors could use this information to monitor changes within a patient.
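A brief sketch of the filtering-plus-correlation step, assuming SciPy: a Butterworth IIR low-pass filter is applied to two synthetic channels and a Pearson correlation is computed between them. The sampling rate, cutoff, and signals are invented, not the study's recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

fs = 100.0                                        # Hz, assumed sampling rate
b, a = butter(N=4, Wn=2.0 / (fs / 2.0), btype="low")  # 4th-order IIR, 2 Hz cutoff

rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
eog = np.sin(2 * np.pi * 0.3 * t) + 0.3 * rng.standard_normal(t.size)
airflow = np.sin(2 * np.pi * 0.3 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

eog_f, airflow_f = filtfilt(b, a, eog), filtfilt(b, a, airflow)
r, p = pearsonr(eog_f, airflow_f)                 # similarity between channels
print(f"r={r:.3f}, p={p:.3g}")
```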
Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael
2018-01-01
Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. PMID:29589829
Hyde, Jonathan M; DaCosta, Gérald; Hatzoglou, Constantinos; Weekes, Hannah; Radiguet, Bertrand; Styman, Paul D; Vurpillot, Francois; Pareige, Cristelle; Etienne, Auriane; Bonny, Giovanni; Castin, Nicolas; Malerba, Lorenzo; Pareige, Philippe
2017-04-01
Irradiation of reactor pressure vessel (RPV) steels causes the formation of nanoscale microstructural features (termed radiation damage), which affect the mechanical properties of the vessel. A key tool for characterizing these nanoscale features is atom probe tomography (APT), due to its high spatial resolution and the ability to identify different chemical species in three dimensions. Microstructural observations using APT can underpin development of a mechanistic understanding of defect formation. However, with atom probe analyses there are currently multiple methods for analyzing the data. This can result in inconsistencies between results obtained from different researchers and unnecessary scatter when combining data from multiple sources. This makes interpretation of results more complex and calibration of radiation damage models challenging. In this work simulations of a range of different microstructures are used to directly compare different cluster analysis algorithms and identify their strengths and weaknesses.
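As an illustration of one family of cluster-analysis algorithms compared in such studies, the sketch below applies density-based clustering (DBSCAN) to synthetic 3D atom positions; eps and min_samples are exactly the kind of user-chosen parameters whose settings drive the inter-method scatter discussed above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
matrix_atoms = rng.uniform(0, 50, size=(5000, 3))             # nm, dilute solutes
cluster_atoms = rng.normal(loc=25, scale=0.5, size=(200, 3))  # one dense feature
positions = np.vstack([matrix_atoms, cluster_atoms])

# eps: neighbor search radius (nm); min_samples: atoms needed to seed a cluster
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(positions)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)    # -1 marks noise
print("clusters found:", n_clusters)
```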
Local-feature analysis for automated coarse-graining of bulk-polymer molecular dynamics simulations.
Xue, Y; Ludovice, P J; Grover, M A
2012-12-01
A method for automated coarse-graining of bulk polymers is presented, using the data-mining tool of local feature analysis. Most existing methods for polymer coarse-graining define superatoms based on their covalent bonding topology along the polymer backbone, but here superatoms are defined based only on their correlated motions, as observed in molecular dynamics simulations. Correlated atomic motions are identified in the simulation data using local feature analysis, between atoms in the same or in different polymer chains. Groups of highly correlated atoms constitute the superatoms in the coarse-graining scheme, and the positions of their seed coordinates are then projected forward in time. Based on only the seed positions, local feature analysis enables the full reconstruction of all atomic positions. This reconstruction suggests an iterative scheme for reducing the computational cost of the simulations: project the seed positions forward in time, reconstruct the full atomic coordinates to initialize another short molecular dynamics simulation, identify new superatoms, and project forward again.
Multispectral analysis of ocean dumped materials
NASA Technical Reports Server (NTRS)
Johnson, R. W.
1977-01-01
Remotely sensed data were collected in conjunction with sea-truth measurements in three experiments in the New York Bight. Pollution features of primary interest were ocean dumped materials, such as sewage sludge and acid waste. Sewage-sludge and acid-waste plumes, including plumes from sewage sludge dumped by the 'line-dump' and 'spot-dump' methods, were located, identified, and mapped. Previously developed quantitative analysis techniques for determining quantitative distributions of materials in sewage sludge dumps were evaluated, along with multispectral analysis techniques developed to identify ocean dumped materials. Results of these experiments and the associated data analysis investigations are presented and discussed.
Environmental analysis of higher brominated diphenyl ethers and decabromodiphenyl ethane.
Kierkegaard, Amelie; Sellström, Ulla; McLachlan, Michael S
2009-01-16
Methods for environmental analysis of higher brominated diphenyl ethers (PBDEs), in particular decabromodiphenyl ether (BDE209), and the recently discovered environmental contaminant decabromodiphenyl ethane (deBDethane) are reviewed. The extensive literature on analysis of BDE209 has identified several critical issues, including contamination of the sample, degradation of the analyte during sample preparation and GC analysis, and the selection of appropriate detection methods and surrogate standards. The limited experience with the analysis of deBDethane suggests that there are many commonalities with BDE209. The experience garnered from the analysis of BDE209 over the last 15 years will greatly facilitate progress in the analysis of deBDethane.
Analysis and methods of improvement of safety at high-speed rural intersections [technical summary].
DOT National Transportation Integrated Search
2012-01-01
INTRODUCTION: Since 2006, INDOT has been preparing an annual five-percent report that identifies intersections and segments on Indiana state roads that require attention due to the excessive number and severity of crashes. Many of the identifi...
Analysis and methods of improvement of safety at high-speed rural intersections : appendix C.
DOT National Transportation Integrated Search
2012-04-01
Since 2006, INDOT has been preparing an annual five-percent report that identifies intersections and segments on Indiana state roads that require attention due to the excessive number and severity of crashes. Many of the identified intersections ar...
Analysis and methods of improvement of safety at high-speed rural intersections : appendix A.
DOT National Transportation Integrated Search
2012-04-01
Since 2006, INDOT has been preparing an annual five-percent report that identifies intersections and segments on Indiana state roads that require attention due to the excessive number and severity of crashes. Many of the identified intersections ar...
Analysis and methods of improvement of safety at high-speed rural intersections : appendix B.
DOT National Transportation Integrated Search
2012-04-01
Since 2006, INDOT has been preparing an annual five-percent report that identifies intersections and segments on Indiana state roads that require attention due to the excessive number and severity of crashes. Many of the identified intersections ar...
Plis, Sergey M; Sui, Jing; Lane, Terran; Roy, Sushmita; Clark, Vincent P; Potluru, Vamsi K; Huster, Rene J; Michael, Andrew; Sponheim, Scott R; Weisend, Michael P; Calhoun, Vince D
2013-01-01
Identifying the complex activity relationships present in rich, modern neuroimaging data sets remains a key challenge for neuroscience. The problem is hard because (a) the underlying spatial and temporal networks may be nonlinear and multivariate and (b) the observed data may be driven by numerous latent factors. Further, modern experiments often produce data sets containing multiple stimulus contexts or tasks processed by the same subjects. Fusing such multi-session data sets may reveal additional structure, but raises further statistical challenges. We present a novel analysis method for extracting complex activity networks from such multifaceted imaging data sets. Compared to previous methods, we choose a new point in the trade-off space, sacrificing detailed generative probability models and explicit latent variable inference in order to achieve robust estimation of multivariate, nonlinear group factors (“network clusters”). We apply our method to identify relationships of task-specific intrinsic networks in schizophrenia patients and control subjects from a large fMRI study. After identifying network-clusters characterized by within- and between-task interactions, we find significant differences between patient and control groups in interaction strength among networks. Our results are consistent with known findings of brain regions exhibiting deviations in schizophrenic patients. However, we also find high-order, nonlinear interactions that discriminate groups but that are not detected by linear, pair-wise methods. We additionally identify high-order relationships that provide new insights into schizophrenia but that have not been found by traditional univariate or second-order methods. Overall, our approach can identify key relationships that are missed by existing analysis methods, without losing the ability to find relationships that are known to be important. PMID:23876245
Computerized analysis of sonograms for the detection of breast lesions
NASA Astrophysics Data System (ADS)
Drukker, Karen; Giger, Maryellen L.; Horsch, Karla; Vyborny, Carl J.
2002-05-01
With a renewed interest in using non-ionizing radiation for the screening of high risk women, there is a clear role for a computerized detection aid in ultrasound. Thus, we are developing a computerized detection method for the localization of lesions on breast ultrasound images. The computerized detection scheme utilizes two methods. Firstly, a radial gradient index analysis is used to distinguish potential lesions from normal parenchyma. Secondly, an image skewness analysis is performed to identify posterior acoustic shadowing. We analyzed 400 cases (757 images) consisting of complex cysts, solid benign lesions, and malignant lesions. The detection method yielded an overall sensitivity of 95% by image, and 99% by case at a false-positive rate of 0.94 per image. In 51% of all images, only the lesion itself was detected, while in 5% of the images only the shadowing was identified. For malignant lesions these numbers were 37% and 9%, respectively. In summary, we have developed a computer detection method for lesions on ultrasound images of the breast, which may ultimately aid in breast cancer screening.
Cancer Detection Using Neural Computing Methodology
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Kohen, Hamid S.; Bearman, Gregory H.; Seligson, David B.
2001-01-01
This paper describes a novel learning methodology used to analyze bio-materials. The premise of this research is to help pathologists quickly identify anomalous cells in a cost-efficient manner. Skilled pathologists must methodically, efficiently and carefully analyze histopathologic materials manually for the presence, amount and degree of malignancy and/or other disease states. The prolonged attention required to accomplish this task induces fatigue that may result in a higher rate of diagnostic errors. In addition, automated image analysis systems to date lack a sufficiently intelligent means of identifying even the most general regions of interest in tissue-based studies, and this shortfall greatly limits their utility. An intelligent data understanding system that could quickly and accurately identify diseased tissues and/or choose regions of interest would be expected to increase the accuracy of diagnosis and usher in truly automated tissue-based image analysis.
NASA Astrophysics Data System (ADS)
Manfredi, Marcello; Barberis, Elettra; Aceto, Maurizio; Marengo, Emilio
2017-06-01
In recent years, the need for non-invasive and non-destructive analytical methods has led to the development and application of new instrumentation and analytical methods for the in-situ analysis of cultural heritage objects. In this work we present the application of a portable diffuse reflectance infrared Fourier transform (DRIFT) method for the non-invasive characterization of colorants prepared according to ancient recipes, using egg white and Gum Arabic as binders. Approximately 50 colorants were analyzed with DRIFT spectroscopy: we were able to identify and discriminate the most widely used yellow (i.e. yellow ochres, Lead-tin Yellow, Orpiment, etc.), red (i.e. red ochres, Hematite) and blue (i.e. Lapis Lazuli, Azurite, indigo) colorants, creating a complete DRIFT spectral library. Principal Component Analysis-Discriminant Analysis (PCA-DA) was then employed to classify the colorants according to their chemical/mineralogical composition. The DRIFT analysis was also performed on a gouache painting by the artist Sutherland, and the colorants used by the painter were identified directly in-situ and in a non-invasive manner.
Meyer, Bernd J.; Sellers, Jeffrey P.; Thomsen, Jan U.
1993-01-01
Apparatus and processes for recognizing and identifying materials. Characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. Desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. The network is first trained according to a predetermined training process; it may then be employed to identify particular materials. Such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation.
Du, Yushen; Wu, Nicholas C.; Jiang, Lin; Zhang, Tianhao; Gong, Danyang; Shu, Sara; Wu, Ting-Ting
2016-01-01
Identification and annotation of functional residues are fundamental questions in protein sequence analysis. Sequence and structure conservation provides valuable information to tackle these questions. It is, however, limited by the incomplete sampling of sequence space in natural evolution. Moreover, proteins often have multiple functions, with overlapping sequences that present challenges to accurate annotation of the exact functions of individual residues by conservation-based methods. Using the influenza A virus PB1 protein as an example, we developed a method to systematically identify and annotate functional residues. We used saturation mutagenesis and high-throughput sequencing to measure the replication capacity of single nucleotide mutations across the entire PB1 protein. After predicting protein stability upon mutations, we identified functional PB1 residues that are essential for viral replication. To further annotate the functional residues important to the canonical or noncanonical functions of viral RNA-dependent RNA polymerase (vRdRp), we performed a homologous-structure analysis with 16 different vRdRp structures. We achieved high sensitivity in annotating the known canonical polymerase functional residues. Moreover, we identified a cluster of noncanonical functional residues located in the loop region of the PB1 β-ribbon. We further demonstrated that these residues were important for PB1 protein nuclear import through the interaction with Ran-binding protein 5. In summary, we developed a systematic and sensitive method to identify and annotate functional residues that are not restrained by sequence conservation. Importantly, this method is generally applicable to other proteins about which homologous-structure information is available. PMID:27803181
Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E
2007-08-01
The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting (R) and respiring (DR) conditions is reported. In this study, recently developed chemometric software for use with three-dimensional instrumentation data was implemented, using a statistically based Fisher ratio method. The Fisher ratio method is fully automated and rapidly reduces the data to pinpoint two-dimensional chromatographic peaks that differentiate sample types while utilizing all the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold more than the number of metabolite peaks previously reported (26). In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 were statistically changing at the 95% confidence limit between the DR and R conditions according to the rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully deconvoluted (i.e. mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double precision floating point).
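The per-variable Fisher ratio has a simple definition: between-class variance over within-class variance. The sketch below computes it for synthetic two-class chromatographic data; the array sizes and the planted discriminating variable are assumptions for illustration.

```python
import numpy as np

def fisher_ratios(X, y):
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    # Between-class mean square (per variable)
    between = sum(X[y == c].shape[0] * (X[y == c].mean(0) - grand_mean) ** 2
                  for c in classes) / (len(classes) - 1)
    # Pooled within-class mean square (per variable)
    within = sum(((X[y == c] - X[y == c].mean(0)) ** 2).sum(0)
                 for c in classes) / (len(X) - len(classes))
    return between / np.maximum(within, 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 500))            # 70 runs x 500 signal variables
y = np.array([0] * 35 + [1] * 35)         # fermenting vs respiring conditions
X[y == 1, 42] += 2.0                      # one variable truly differs
F = fisher_ratios(X, y)
print("top variable:", int(F.argmax()), "ratio:", round(float(F.max()), 1))
```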
Folded concave penalized learning in identifying multimodal MRI marker for Parkinson’s disease
Liu, Hongcheng; Du, Guangwei; Zhang, Lijun; Lewis, Mechelle M.; Wang, Xue; Yao, Tao; Li, Runze; Huang, Xuemei
2016-01-01
Background Brain MRI holds promise to gauge different aspects of Parkinson’s disease (PD)-related pathological changes. Its analysis, however, is hindered by the high-dimensional nature of the data. New method This study introduces folded concave penalized (FCP) sparse logistic regression to identify biomarkers for PD from a large number of potential factors. The proposed statistical procedures target the challenges of high dimensionality with the limited data samples acquired. The maximization problem associated with the sparse logistic regression model is solved by local linear approximation. The proposed procedures are then applied to the empirical analysis of multimodal MRI data. Results From 45 features, the proposed approach identified 15 MRI markers and the UPSIT, which are known to be clinically relevant to PD. By combining the MRI and clinical markers, we can substantially enhance the specificity and sensitivity of the model, as indicated by the ROC curves. Comparison to existing methods We compare the folded concave penalized learning scheme with both the Lasso penalized scheme and principal component analysis-based feature selection (PCA) in the Parkinson’s biomarker identification problem, which takes into account both the clinical features and MRI markers. The folded concave penalty method demonstrates substantially better clinical potential than both the Lasso and PCA in terms of specificity and sensitivity. Conclusions For the first time, we applied the FCP learning method to MRI biomarker discovery in PD. The proposed approach successfully identified MRI markers that are clinically relevant. Combining these biomarkers with clinical features can substantially enhance performance. PMID:27102045
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
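A minimal sketch of the Jacobi (diagonal) preconditioned conjugate gradient approach using SciPy's sparse solver; the tridiagonal test matrix stands in for a structural stiffness matrix and is not from the report.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 2000
# Symmetric positive-definite tridiagonal test matrix (placeholder "stiffness" matrix)
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply the inverse of diag(A)
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

x, info = cg(A, b, M=M)
print("converged:", info == 0, "residual norm:", np.linalg.norm(A @ x - b))
```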
Quantitative Analysis of Qualitative Information from Interviews: A Systematic Literature Review
ERIC Educational Resources Information Center
Fakis, Apostolos; Hilliam, Rachel; Stoneley, Helen; Townend, Michael
2014-01-01
Background: A systematic literature review was conducted on mixed methods area. Objectives: The overall aim was to explore how qualitative information from interviews has been analyzed using quantitative methods. Methods: A contemporary review was undertaken and based on a predefined protocol. The references were identified using inclusion and…
Use of direct gradient analysis to uncover biological hypotheses in 16S survey data and beyond.
Erb-Downward, John R; Sadighi Akha, Amir A; Wang, Juan; Shen, Ning; He, Bei; Martinez, Fernando J; Gyetko, Margaret R; Curtis, Jeffrey L; Huffnagle, Gary B
2012-01-01
This study investigated the use of direct gradient analysis of bacterial 16S pyrosequencing surveys to identify relevant bacterial community signals in the midst of a "noisy" background, and to facilitate hypothesis-testing both within and beyond the realm of ecological surveys. The results, utilizing 3 different real world data sets, demonstrate the utility of adding direct gradient analysis to any analysis that draws conclusions from indirect methods such as Principal Component Analysis (PCA) and Principal Coordinates Analysis (PCoA). Direct gradient analysis produces testable models, and can identify significant patterns in the midst of noisy data. Additionally, we demonstrate that direct gradient analysis can be used with other kinds of multivariate data sets, such as flow cytometric data, to identify differentially expressed populations. The results of this study demonstrate the utility of direct gradient analysis in microbial ecology and in other areas of research where large multivariate data sets are involved.
Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad
2016-01-01
Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research therefore tries to identify and analyze ICU process failure modes from the standpoint of a systems approach to error. Methods: In this descriptive study, data were gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on the failures’ Risk Priority Numbers (RPN) under the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed qualitatively using the Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. With 90% reliability (RPN≥100), a total of 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were then analyzed by ECM. Conclusions: Applying modified PFMEA to improve the reliability of processes in two ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions, and even participate in improving processes without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can readily identify failure causes from a health care perspective. PMID:27157162
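The RPN arithmetic at the heart of FMEA is simple enough to show directly: severity times occurrence times detection, with RPN >= 100 flagged as non-acceptable, matching the threshold used in the study. The failure modes and scores below are invented examples.

```python
# Each tuple: (description, severity, occurrence, detection), scored 1-10
failure_modes = [
    ("ventilator alarm not heard", 9, 4, 5),
    ("wrong infusion rate entered", 8, 3, 3),
    ("delayed lab result transcription", 5, 5, 2),
]

for name, severity, occurrence, detection in failure_modes:
    rpn = severity * occurrence * detection          # Risk Priority Number
    flag = "NON-ACCEPTABLE" if rpn >= 100 else "acceptable"
    print(f"{name}: RPN={rpn} ({flag})")
```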
NASA Astrophysics Data System (ADS)
Zhang, Fan; Liu, Pinkuan
2018-04-01
In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
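A hedged sketch of B-spline-based error compensation with SciPy: fit a smoothing B-spline to measured positioning error versus commanded position, then subtract the model's prediction from new commands. The calibration data and smoothing factor are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
cmd = np.linspace(0, 300, 31)                            # mm, calibration points
measured_err = 0.002 * np.sin(cmd / 40.0) + 0.0002 * rng.standard_normal(cmd.size)

tck = splrep(cmd, measured_err, s=1e-7)                  # fitted B-spline error model

target = 123.4                                           # mm, commanded position
compensated = target - splev(target, tck)                # pre-subtract predicted error
print(f"command {compensated:.5f} mm to reach {target} mm")
```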
Erdeljić, Viktorija; Francetić, Igor; Bošnjak, Zrinka; Budimir, Ana; Kalenić, Smilja; Bielen, Luka; Makar-Aušperger, Ksenija; Likić, Robert
2011-05-01
The relationship between antibiotic consumption and selection of resistant strains has been studied mainly by employing conventional statistical methods. A time delay in effect must be anticipated and this has rarely been taken into account in previous studies. Therefore, distributed lags time series analysis and simple linear correlation were compared in their ability to evaluate this relationship. Data on monthly antibiotic consumption for ciprofloxacin, piperacillin/tazobactam, carbapenems and cefepime as well as Pseudomonas aeruginosa susceptibility were retrospectively collected for the period April 2006 to July 2007. Using distributed lags analysis, a significant temporal relationship was identified between ciprofloxacin, meropenem and cefepime consumption and the resistance rates of P. aeruginosa isolates to these antibiotics. This effect was lagged for ciprofloxacin and cefepime [1 month (R=0.827, P=0.039) and 2 months (R=0.962, P=0.001), respectively] and was simultaneous for meropenem (lag 0, R=0.876, P=0.002). Furthermore, a significant concomitant effect of meropenem consumption on the appearance of multidrug-resistant P. aeruginosa strains (resistant to three or more representatives of classes of antibiotics) was identified (lag 0, R=0.992, P<0.001). This effect was not delayed and it was therefore identified both by distributed lags analysis and the Pearson's correlation coefficient. Correlation coefficient analysis was not able to identify relationships between antibiotic consumption and bacterial resistance when the effect was delayed. These results indicate that the use of diverse statistical methods can yield significantly different results, thus leading to the introduction of possibly inappropriate infection control measures. Copyright © 2010 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
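The contrast drawn above can be illustrated with a toy computation: Pearson correlations evaluated at several lags recover a relationship that a lag-0-only analysis misses. The monthly series below are simulated with a built-in two-month delay; this is not the study's data or its distributed-lags estimator.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
months = 16
consumption = rng.normal(10, 2, size=months)             # DDDs per month (synthetic)
# Resistance responds to consumption with a 2-month delay plus noise
resistance = np.roll(consumption, 2) * 0.8 + rng.normal(0, 0.5, size=months)
resistance[:2] = rng.normal(8, 1, size=2)                # replace rolled-in values

for lag in range(0, 4):
    r, p = pearsonr(consumption[: months - lag], resistance[lag:])
    print(f"lag {lag} months: r={r:.2f}, p={p:.3f}")
```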
Is It Feasible to Identify Natural Clusters of TSC-Associated Neuropsychiatric Disorders (TAND)?
Leclezio, Loren; Gardner-Lubbe, Sugnet; de Vries, Petrus J
2018-04-01
Tuberous sclerosis complex (TSC) is a genetic disorder with multisystem involvement. The lifetime prevalence of TSC-Associated Neuropsychiatric Disorders (TAND) is in the region of 90% in an apparently unique, individual pattern. This "uniqueness" poses significant challenges for diagnosis, psycho-education, and intervention planning. To date, no studies have explored whether there may be natural clusters of TAND. The purpose of this feasibility study was (1) to investigate the practicability of identifying natural TAND clusters, and (2) to identify appropriate multivariate data analysis techniques for larger-scale studies. TAND Checklist data were collected from 56 individuals with a clinical diagnosis of TSC (n = 20 from South Africa; n = 36 from Australia). Using R, the open-source statistical platform, mean squared contingency coefficients were calculated to produce a correlation matrix, and various cluster analyses and exploratory factor analysis were examined. Ward's method rendered six TAND clusters with good face validity and significant convergence with a six-factor exploratory factor analysis solution. The "bottom-up" data-driven strategies identified a "scholastic" cluster of TAND manifestations, an "autism spectrum disorder-like" cluster, a "dysregulated behavior" cluster, a "neuropsychological" cluster, a "hyperactive/impulsive" cluster, and a "mixed/mood" cluster. These feasibility results suggest that a combination of cluster analysis and exploratory factor analysis methods may be able to identify clinically meaningful natural TAND clusters. Findings require replication and expansion in larger dataset, and could include quantification of cluster or factor scores at an individual level. Copyright © 2018 Elsevier Inc. All rights reserved.
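A small sketch of the Ward-clustering step using SciPy, with a correlation-derived distance between checklist items; the random binary responses and six-cluster cut are placeholders, and treating correlation distance as if Euclidean is a common exploratory shortcut rather than the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(56, 20)).astype(float)  # 56 subjects x 20 items

corr = np.corrcoef(items, rowvar=False)                  # item-item association
dist = squareform(1.0 - np.abs(corr), checks=False)      # condensed distance form
clusters = fcluster(linkage(dist, method="ward"), t=6, criterion="maxclust")
print(np.bincount(clusters)[1:])                         # items per cluster
```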
Xie, Xin-Ping; Xie, Yu-Feng; Wang, Hong-Qiang
2017-08-23
Large-scale accumulation of omics data poses a pressing challenge of integrative analysis of multiple data sets in bioinformatics. An open question of such integrative analysis is how to pinpoint consistent but subtle gene activity patterns across studies. Study heterogeneity needs to be addressed carefully for this goal. This paper proposes a regulation probability model-based meta-analysis, jGRP, for identifying differentially expressed genes (DEGs). The method integrates multiple transcriptomics data sets in a gene regulatory space instead of in a gene expression space, which makes it easy to capture and manage data heterogeneity across studies from different laboratories or platforms. Specifically, we transform gene expression profiles into a united gene regulation profile across studies by mathematically defining two gene regulation events between two conditions and estimating their occurring probabilities in a sample. Finally, a novel differential expression statistic is established based on the gene regulation profiles, realizing accurate and flexible identification of DEGs in gene regulation space. We evaluated the proposed method on simulation data and real-world cancer datasets and showed the effectiveness and efficiency of jGRP in identifying DEGs in the context of meta-analysis. Data heterogeneity largely influences the performance of meta-analysis of DEG identification. Existing meta-analysis methods were revealed to exhibit very different degrees of sensitivity to study heterogeneity. The proposed method, jGRP, can be a standalone tool due to its united framework and controllable way of dealing with study heterogeneity.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. Variance-based sensitivity analysis is thus improved to investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the flow reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
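For the variance-based core, here is a minimal pick-and-freeze estimator of first-order Sobol indices (Saltelli-style) on a toy function standing in for the groundwater model; sample sizes, distributions, and the model itself are illustrative assumptions.

```python
import numpy as np

def model(x):
    # Toy stand-in for a reactive transport model response
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A, B = rng.uniform(0, 1, (n, d)), rng.uniform(0, 1, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # vary only parameter i
    S1 = np.mean(fB * (model(ABi) - fA)) / var    # Saltelli (2010) estimator
    print(f"first-order index S1[{i}] ~ {S1:.3f}")
```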
Sun, Li-Li; Wang, Meng; Zhang, Hui-Jie; Liu, Ya-Nan; Ren, Xiao-Liang; Deng, Yan-Ru; Qi, Ai-Di
2018-01-01
Polygoni Multiflori Radix (PMR) is increasingly being used not just as a traditional herbal medicine but also as a popular functional food. In this study, multivariate chemometric methods and mass spectrometry were combined to analyze the ultra-high-performance liquid chromatograph (UPLC) fingerprints of PMR from six different geographical origins. A chemometric strategy based on multivariate curve resolution-alternating least squares (MCR-ALS) and three classification methods is proposed to analyze the UPLC fingerprints obtained. Common chromatographic problems, including the background contribution, baseline contribution, and peak overlap, were handled by the established MCR-ALS model. A total of 22 components were resolved. Moreover, relative species concentrations were obtained from the MCR-ALS model, which was used for multivariate classification analysis. Principal component analysis (PCA) and Ward's method have been applied to classify 72 PMR samples from six different geographical regions. The PCA score plot showed that the PMR samples fell into four clusters, which related to the geographical location and climate of the source areas. The results were then corroborated by Ward's method. In addition, according to the variance-weighted distance between cluster centers obtained from Ward's method, five components were identified as the most significant variables (chemical markers) for cluster discrimination. A counter-propagation artificial neural network has been applied to confirm and predict the effects of chemical markers on different samples. Finally, the five chemical markers were identified by UPLC-quadrupole time-of-flight mass spectrometer. Components 3, 12, 16, 18, and 19 were identified as 2,3,5,4'-tetrahydroxy-stilbene-2-O-β-d-glucoside, emodin-8-O-β-d-glucopyranoside, emodin-8-O-(6'-O-acetyl)-β-d-glucopyranoside, emodin, and physcion, respectively. In conclusion, the proposed method can be applied for the comprehensive analysis of natural samples. Copyright © 2016. Published by Elsevier B.V.
Brinberg, Miriam; Fosco, Gregory M; Ram, Nilam
2017-12-01
Family systems theorists have forwarded a set of theoretical principles meant to guide family scientists and practitioners in their conceptualization of patterns of family interaction-intra-family dynamics-that, over time, give rise to family and individual dysfunction and/or adaptation. In this article, we present an analytic approach that merges state space grid methods adapted from the dynamic systems literature with sequence analysis methods adapted from molecular biology into a "grid-sequence" method for studying inter-family differences in intra-family dynamics. Using dyadic data from 86 parent-adolescent dyads who provided up to 21 daily reports about connectedness, we illustrate how grid-sequence analysis can be used to identify a typology of intrafamily dynamics and to inform theory about how specific types of intrafamily dynamics contribute to adolescent behavior problems and family members' mental health. Methodologically, grid-sequence analysis extends the toolbox of techniques for analysis of family experience sampling and daily diary data. Substantively, we identify patterns of family level microdynamics that may serve as new markers of risk/protective factors and potential points for intervention in families. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Using link analysis to explore the impact of the physical environment on pharmacist tasks.
Lester, Corey A; Chui, Michelle A
2016-01-01
National community pharmacy organizations have been redesigning pharmacies to better facilitate direct patient care. However, evidence suggests that changing the physical layout of a pharmacy prior to understanding how the environment impacts pharmacists' work may not achieve the desired benefits. This study describes an objective method for understanding how the physical layout of the pharmacy may affect how pharmacists perform tasks. Link analysis is a systems engineering method used to describe the influence of the physical environment on task completion. This study used a secondary data set of field notes collected from 9 h of direct observation in one mass-merchandise community pharmacy in Wisconsin, USA. A node is an individual location in the environment. A link is the movement between two nodes. Tasks were inventoried and task themes identified. The mean, minimum, and maximum number of links needed to complete each task were then determined and used to construct a link table. A link diagram is a graphical display showing the links in conjunction with the physical layout of the pharmacy. A total of 92 unique tasks were identified, resulting in 221 links. Tasks were sorted into five themes: patient care activities, insurance issues, verifying prescriptions, filling prescriptions, and other. Insurance issues required the greatest number of links, with a mean of 4.75. Verifying prescriptions and performing patient care were the most commonly performed tasks, with 36 and 30 unique task occurrences, respectively. Link analysis provides an objective method for identifying how a pharmacist interacts with the physical environment to complete tasks. This method provides designers with useful information to target interventions that improve the effectiveness of pharmacist work. Analysis beyond link analysis should be considered for large-scale system redesign. Copyright © 2015 Elsevier Inc. All rights reserved.
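The bookkeeping behind a link table is straightforward to sketch: count movements (links) between locations (nodes) for each observed task. The node names and sequences below are invented, not the study's field notes.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Hypothetical observations: (task, sequence of nodes visited)
observations = [
    ("insurance issue", ["register", "computer", "phone", "computer", "register"]),
    ("verify prescription", ["fill counter", "computer", "fill counter"]),
    ("patient care", ["register", "consult window"]),
]

link_counts = Counter()
links_per_task = {}
for task, nodes in observations:
    links = list(pairwise(nodes))        # each adjacent pair is one link
    links_per_task[task] = len(links)
    link_counts.update(links)

print(links_per_task)                    # number of links needed per task
print(link_counts.most_common(3))        # busiest node-to-node movements
```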
Arias, María Luisa Flores; Champion, Jane Dimmitt; Soto, Norma Elva Sáenz
2017-08-01
Development of a Spanish Version Contraceptive Self-efficacy Scale for use among heterosexual Mexican populations of reproductive age (18-35 years). Use of family planning methods has decreased in Mexico, which may lead to an increase in unintended pregnancies. Contraceptive self-efficacy is considered a predictor and precursor of family planning method use. A cross-sectional, descriptive study design was used to assess contraceptive self-efficacy among a heterosexual Mexican population (N=160) of reproductive age (18-35 years). Adaptation of the Spanish Version Contraceptive Self-efficacy Scale was conducted prior to instrument administration. Exploratory and confirmatory factor analyses identified seven factors with a variance of 72.812%. The adapted scale had a Cronbach's alpha of 0.771. A significant correlation between the Spanish Version Contraceptive Self-efficacy Scale and the use of family planning methods was identified. The Spanish Version Contraceptive Self-efficacy Scale has an acceptable Cronbach's alpha, and exploratory factor analysis identified seven components. A positive correlation between self-reported contraceptive self-efficacy and family planning method use was identified. This scale may be used among heterosexual Mexican men and women of reproductive age. The factor analysis (7 factors versus 4 factors for the original scale) identified a discrepancy in interpretation between the Spanish and English language versions. Findings obtained via the Spanish version among heterosexual Mexican men and women of reproductive age should be interpreted in light of the differences identified in these analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
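Cronbach's alpha, reported above as 0.771, has a short closed form; the sketch below computes it from a synthetic respondent-by-item matrix (160 respondents and 15 items are placeholders echoing the study's scale size, not its data).

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items response matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(160, 1))                          # shared trait
responses = latent + rng.normal(scale=1.0, size=(160, 15))  # 15 Likert-type items
print(round(cronbach_alpha(responses), 3))
```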
N-Nitrosodimethylamine (NDMA) is a probable human carcinogen that has been identified as a drinking water contaminant of concern. United States Environmental Protection Agency (USEPA) Method 521 has been developed for the analysis of NDMA and six additional N-nitrosamines in dri...
ERIC Educational Resources Information Center
Hwang, Heungsun; Montreal, Hec; Dillon, William R.; Takane, Yoshio
2006-01-01
An extension of multiple correspondence analysis is proposed that takes into account cluster-level heterogeneity in respondents' preferences/choices. The method involves combining multiple correspondence analysis and k-means in a unified framework. The former is used for uncovering a low-dimensional space of multivariate categorical variables…
Detecting Outliers in Factor Analysis Using the Forward Search Algorithm
ERIC Educational Resources Information Center
Mavridis, Dimitris; Moustaki, Irini
2008-01-01
In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…
Failla, A J; Vasquez, A A; Hudson, P; Fujimoto, M; Ram, J L
2016-02-01
Establishing reliable methods for the identification of benthic chironomid communities is important due to their significant contribution to biomass, ecology and the aquatic food web. Immature larval specimens are more difficult to identify to species level by traditional morphological methods than their fully developed adult counterparts, and few keys are available to identify the larval species. In order to develop molecular criteria to identify species of chironomid larvae, larval and adult chironomids from Western Lake Erie were subjected to both molecular and morphological taxonomic analysis. Mitochondrial cytochrome c oxidase I (COI) barcode sequences of 33 adults that were identified to species level by morphological methods were grouped with COI sequences of 189 larvae in a neighbor-joining taxon-ID tree. Most of these larvae could be identified only to genus level by morphological taxonomy (only 22 of the 189 sequenced larvae could be identified to species level). The taxon-ID tree of larval sequences had 45 operational taxonomic units (OTUs, defined as clusters with >97% identity or individual sequences differing from nearest neighbors by >3%; supported by analysis of all larval pairwise differences), of which seven could be identified to species or 'species group' level by larval morphology. Reference sequences from the GenBank and BOLD databases assigned six larval OTUs with presumptive species level identifications and confirmed one previously assigned species level identification. Sequences from morphologically identified adults in the present study grouped with and further classified the identity of 13 larval OTUs. The use of morphological identification and subsequent DNA barcoding of adult chironomids proved to be beneficial in revealing possible species level identifications of larval specimens. Sequence data from this study also contribute to currently inadequate public databases relevant to the Great Lakes region, while the neighbor-joining analysis reported here describes the application and confirmation of a useful tool that can accelerate identification and bioassessment of chironomid communities.
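The OTU rule used above (clusters with >97% identity, i.e. nearest-neighbor divergence below 3%) maps naturally onto hierarchical clustering of pairwise distances. A minimal sketch, assuming a precomputed matrix of p-distances (1 - identity) between COI sequences:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise p-distances between three COI sequences.
D = np.array([[0.00, 0.01, 0.12],
              [0.01, 0.00, 0.11],
              [0.12, 0.11, 0.00]])

# Single-linkage clustering cut at 3% divergence approximates the OTU rule.
Z = linkage(squareform(D), method="single")
otus = fcluster(Z, t=0.03, criterion="distance")
print(otus)  # e.g., [1 1 2]: the first two sequences form one OTU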
Ibáñez-Vea, María; Huang, Honggang; Martínez de Morentin, Xabier; Pérez, Estela; Gato, Maria; Zuazo, Miren; Arasanz, Hugo; Fernández-Irigoyen, Joaquin; Santamaría, Enrique; Fernandez-Hinojal, Gonzalo; Larsen, Martin R; Escors, David; Kochan, Grazyna
2018-03-02
Protein S-nitrosylation is a cysteine post-translational modification mediated by nitric oxide. An increasing number of studies highlight S-nitrosylation as an important regulator of signaling involved in numerous cellular processes. Despite the significant progress in the development of redox proteomic methods, identification and quantification of endogenous S-nitrosylation using high-throughput mass-spectrometry-based methods is a technical challenge because this modification is highly labile. To overcome this drawback, most methods induce S-nitrosylation chemically in proteins using nitrosylating compounds before analysis, with the risk of introducing nonphysiological S-nitrosylation. Here we present a novel method to efficiently identify endogenous S-nitrosopeptides in the macrophage total proteome. Our approach is based on the labeling of S-nitrosopeptides reduced by ascorbate with a cysteine-specific phosphonate adaptable tag (CysPAT), followed by titanium dioxide (TiO2) chromatography enrichment prior to nLC-MS/MS analysis. To test our procedure, we performed a large-scale analysis of this low-abundance modification in a murine macrophage cell line. We identified 569 endogenous S-nitrosylated proteins compared with 795 following exogenous chemically induced S-nitrosylation. Importantly, we discovered 579 novel S-nitrosylation sites. The large number of identified endogenous S-nitrosylated peptides allowed the definition of two S-nitrosylation consensus sites, highlighting protein translation and redox processes as key S-nitrosylation targets in macrophages.
Automated classification and quantitative analysis of arterial and venous vessels in fundus images
NASA Astrophysics Data System (ADS)
Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng
2018-02-01
It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate one automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify the source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, each whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. Fifty color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).
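The enhancement/segmentation/skeletonization chain described above can be approximated with standard image-processing primitives. A minimal sketch using scikit-image; the file name is hypothetical, and Otsu thresholding stands in for the paper's unspecified global threshold:

```python
from skimage import io, morphology, filters

# Hypothetical fundus image; the green channel usually carries the most
# vessel contrast.
img = io.imread("fundus.png")
green = img[..., 1].astype(float)

# Bottom-hat (black top-hat) filtering enhances dark vessels on a brighter
# background, mirroring the enhancement step described above.
enhanced = morphology.black_tophat(green, morphology.disk(7))

# Global thresholding segments vessels (Otsu used here as the global rule).
mask = enhanced > filters.threshold_otsu(enhanced)

# Skeletonization reduces each vessel to a one-pixel-wide centerline,
# suitable for tracking and source-node identification.
skeleton = morphology.skeletonize(mask)
```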
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
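A log-normal noise model means the log-residuals are Gaussian, which gives a simple closed-form negative log-likelihood. A minimal sketch (the function names are illustrative; the paper's actual objective is evaluated under PDE constraints):

```python
import numpy as np

def neg_log_likelihood(y_obs, y_model, sigma):
    """Negative log-likelihood for intensities with log-normal noise:
    log(y_obs) ~ Normal(log(y_model), sigma^2), applied pixel-wise."""
    r = np.log(y_obs) - np.log(y_model)
    return np.sum(0.5 * (r / sigma) ** 2
                  + np.log(y_obs * sigma * np.sqrt(2.0 * np.pi)))

# Profile likelihood for one parameter: fix it on a grid and re-optimize
# all remaining parameters at every grid point, recording the best
# neg_log_likelihood value (optimization loop omitted here).
```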
Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul
2016-01-01
Analysis of lard extracted from a lipstick formulation containing castor oil was performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared: saponification followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, saponification followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as the extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) regression, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavenumber region of 1200-800 cm-1, with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest coefficient of determination (R2) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.
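The RMSEC/RMSEP comparison above is straightforward to reproduce for any spectral data set. A minimal sketch with random stand-in data (the array shapes and the number of PLS components are assumptions, not values from the study):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-in FTIR spectra restricted to 1200-800 cm-1 and lard fractions.
X = np.random.rand(40, 200)   # 40 samples x 200 wavenumber points
y = np.random.rand(40)        # lard content

X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
r2 = pls.score(X_cal, y_cal)  # coefficient of determination on calibration set
```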
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia, in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
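Morris screening of a 13-parameter model is easy to set up with the SALib package (assumed available). The sketch below uses a cheap surrogate in place of SAC-SMA; note that 20 trajectories over 13 parameters gives 20 × (13 + 1) = 280 model runs, matching the MOAT budget quoted above:

```python
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze import morris

problem = {
    "num_vars": 13,
    "names": [f"p{i}" for i in range(13)],   # hypothetical parameter names
    "bounds": [[0.0, 1.0]] * 13,
}

X = sample(problem, N=20)                    # 20 trajectories -> 280 runs
w = np.arange(13, dtype=float)
Y = X @ w + 0.1 * X[:, 0] * X[:, 1]          # cheap stand-in for the real model

Si = morris.analyze(problem, X, Y)
# mu_star ranks parameters by overall influence.
print(sorted(zip(problem["names"], Si["mu_star"]), key=lambda t: -t[1])[:3])
```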
Li, Chunquan; Han, Junwei; Yao, Qianlan; Zou, Chendan; Xu, Yanjun; Zhang, Chunlong; Shang, Desi; Zhou, Lingyun; Zou, Chaoxia; Sun, Zeguo; Li, Jing; Zhang, Yunpeng; Yang, Haixiu; Gao, Xu; Li, Xia
2013-05-01
Various 'omics' technologies, including microarrays and gas chromatography mass spectrometry, can be used to identify hundreds of interesting genes, proteins and metabolites, such as differential genes, proteins and metabolites associated with diseases. Identifying metabolic pathways has become an invaluable aid to understanding the genes and metabolites associated with the conditions under study. However, the classical methods used to identify pathways fail to accurately consider the joint power of interesting genes/metabolites and the key regions they impact within metabolic pathways. In this study, we propose a powerful analytical method referred to as Subpathway-GM for the identification of metabolic subpathways. This provides a more accurate level of pathway analysis by integrating information from genes and metabolites, and their positions and cascade regions within a given pathway. We analyzed two colorectal cancer data sets and one metastatic prostate cancer data set and demonstrated that Subpathway-GM was able to identify disease-relevant subpathways whose corresponding entire pathways might be ignored by classical entire-pathway identification methods. Further analysis indicated that a joint genes/metabolites and subpathway strategy based on their topologies may play a key role in reliably recalling disease-relevant subpathways and finding novel subpathways.
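For context, the classical over-representation baseline that position-aware methods like Subpathway-GM refine is a hypergeometric test per pathway. A minimal sketch with invented counts:

```python
from scipy.stats import hypergeom

# Classical enrichment test for one pathway (all numbers hypothetical).
N = 20000   # background genes
K = 150     # genes annotated to the pathway
n = 300     # interesting (e.g., differential) genes
k = 12      # interesting genes that fall in the pathway

p = hypergeom.sf(k - 1, N, K, n)  # P(X >= k) under random overlap
print(f"enrichment p-value: {p:.3g}")
```

This test treats the pathway as a flat gene list, which is exactly the limitation the abstract points out: it ignores where the hits sit in the pathway and what cascade regions they influence.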
NASA Technical Reports Server (NTRS)
Rampe, E. B.; Lanza, N. L.
2012-01-01
Orbital near-infrared (NIR) reflectance spectra of the martian surface from the OMEGA and CRISM instruments have identified a variety of phyllosilicates in Noachian terrains. The types of phyllosilicates present on Mars have important implications for the aqueous environments in which they formed, and, thus, for recognizing locales that may have been habitable. Current identifications of phyllosilicates from martian NIR data are based on the positions of spectral absorptions relative to laboratory data of well-characterized samples and on spectral ratios; however, some phyllosilicates can be difficult to distinguish from one another with these methods (e.g., illite vs. muscovite). Here we employ a multivariate statistical technique, principal component analysis (PCA), to differentiate between spectrally similar phyllosilicate minerals. PCA is commonly used in a variety of industries (pharmaceutical, agricultural, viticultural) to discriminate between samples. Previous work using PCA to analyze raw NIR reflectance data from mineral mixtures has shown that this is a viable technique for identifying mineral types, abundances, and particle sizes. Here, we evaluate PCA of second-derivative NIR reflectance data as a method for classifying phyllosilicates and test whether this method can be used to identify phyllosilicates on Mars.
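The preprocessing-plus-PCA pipeline described above is short to express in code. A minimal sketch with random stand-in spectra; the Savitzky-Golay window and polynomial order are assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

# Stand-in NIR reflectance spectra: samples x wavelength channels.
spectra = np.random.rand(30, 500)

# Second-derivative preprocessing sharpens overlapping absorption features
# before the multivariate analysis.
d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

scores = PCA(n_components=3).fit_transform(d2)
# Spectrally similar phyllosilicates should separate in the PC score space.
```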
Law, Emily F.; Beals-Erickson, Sarah E.; Fisher, Emma; Lang, Emily A.; Palermo, Tonya M.
2017-01-01
Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache. PMID:29503787
Traditional and Cognitive Job Analyses as Tools for Understanding the Skills Gap.
ERIC Educational Resources Information Center
Hanser, Lawrence M.
Traditional methods of job and task analysis may be categorized as worker-oriented methods focusing on general human behaviors performed by workers in jobs or as job-oriented methods focusing on the technologies involved in jobs. The ability of both types of traditional methods to identify, understand, and communicate the skills needed in high…
Rapin, Nicolas; Bagger, Frederik Otzen; Jendholm, Johan; Mora-Jensen, Helena; Krogh, Anders; Kohlmann, Alexander; Thiede, Christian; Borregaard, Niels; Bullinger, Lars; Winther, Ole; Theilgaard-Mönch, Kim; Porse, Bo T
2014-02-06
Gene expression profiling has been used extensively to characterize cancer, identify novel subtypes, and improve patient stratification. However, it has largely failed to identify transcriptional programs that differ between cancer and corresponding normal cells and has not been efficient in identifying expression changes fundamental to disease etiology. Here we present a method that facilitates the comparison of any cancer sample to its nearest normal cellular counterpart, using acute myeloid leukemia (AML) as a model. We first generated a gene expression-based landscape of the normal hematopoietic hierarchy, using expression profiles from normal stem/progenitor cells, and next mapped the AML patient samples to this landscape. This allowed us to identify the closest normal counterpart of individual AML samples and determine gene expression changes between cancer and normal. We find the cancer vs normal method (CvN method) to be superior to conventional methods in stratifying AML patients with aberrant karyotype and in identifying common aberrant transcriptional programs with potential importance for AML etiology. Moreover, the CvN method uncovered a novel poor-outcome subtype of normal-karyotype AML, which allowed for the generation of a highly prognostic survival signature. Collectively, our CvN method holds great potential as a tool for the analysis of gene expression profiles of cancer patients.
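The mapping step — finding a cancer sample's nearest normal counterpart in the expression landscape — reduces to a similarity search. A minimal sketch using rank correlation; the function is illustrative and not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import spearmanr

def nearest_normal(cancer, normals):
    """Index of the normal stem/progenitor profile most similar to a cancer
    expression profile (rank correlation over the shared gene set)."""
    cors = [spearmanr(cancer, ref).correlation for ref in normals]
    return int(np.argmax(cors))

# Cancer-vs-normal changes are then taken against the matched counterpart:
# delta = cancer - normals[nearest_normal(cancer, normals)]
```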
Static analysis of class invariants in Java programs
NASA Astrophysics Data System (ADS)
Bonilla-Quintero, Lidia Dionisia
2011-12-01
This paper presents a technique for the automatic inference of class invariants from Java bytecode. Class invariants are very important both for compiler optimization and as an aid to programmers in their efforts to reduce the number of software defects. We present the original DC-invariant analysis from Adam Webber, discuss its shortcomings, and suggest several different ways to improve it. To apply the DC-invariant analysis to identify DC-invariant assertions, all that one needs is a monotonic method analysis function and a suitable assertion domain. The DC-invariant algorithm is very general; however, the method analysis can be highly tuned to the problem at hand. For example, one could choose shape analysis as the method analysis function and use the DC-invariant analysis to simply extend it to an analysis that would yield class-wide invariants describing the shapes of linked data structures. We have a prototype implementation: a system we refer to as "the analyzer" that infers DC-invariant unary and binary relations and provides them to the user in a human-readable format. The analyzer uses those relations to identify unnecessary array bounds checks in Java programs and perform null-reference analysis. It uses Adam Webber's relational constraint technique for the class-invariant binary relations. Early results with the analyzer were very imprecise in the presence of "dirty-called" methods. A dirty-called method is one that is called, either directly or transitively, from any constructor of the class, or from any method of the class at a point at which a disciplined field has been altered. This result was unexpected and forced an extensive search for improved techniques. An important contribution of this paper is the suggestion of several ways to improve the results by changing the way dirty-called methods are handled. The new techniques expand the set of class invariants that can be inferred over Webber's original results. The technique that produces better results uses in-line analysis. Final results are promising: we can infer sound class invariants for full-scale applications, not just toy ones.
Vidyasagar, Mathukumalli
2015-01-01
This article reviews several techniques from machine learning that can be used to study the problem of identifying a small number of features, from among tens of thousands of measured features, that can accurately predict a drug response. Prediction problems are divided into two categories: sparse classification and sparse regression. In classification, the clinical parameter to be predicted is binary, whereas in regression, the parameter is a real number. Well-known methods for both classes of problems are briefly discussed. These include the SVM (support vector machine) for classification and various algorithms such as ridge regression, LASSO (least absolute shrinkage and selection operator), and EN (elastic net) for regression. In addition, several well-established methods that do not directly fall into machine learning theory are also reviewed, including neural networks, PAM (pattern analysis for microarrays), SAM (significance analysis for microarrays), GSEA (gene set enrichment analysis), and k-means clustering. Several references indicative of the application of these methods to cancer biology are discussed.
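The sparse-regression idea in the review — let an L1 penalty zero out all but a handful of predictive features — is directly available in scikit-learn. A minimal sketch with synthetic data (dimensions, penalty strengths, and the planted 5-feature signal are all invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Synthetic data: 100 samples x 5000 measured features, real-valued response
# driven by the first five features only.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5000))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(100)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # sparse: few nonzero coefficients
print(len(selected), "features selected")

# The elastic net mixes L1 and L2 penalties, tending to keep groups of
# correlated features together rather than picking one arbitrarily.
en = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
```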
Identification of sulfur fumed Pinelliae Rhizoma using an electronic nose
Zhou, Xia; Wan, Jun; Chu, Liang; Liu, Wengang; Jing, Yafeng; Wu, Chunjie
2014-01-01
Background: Pinelliae Rhizoma is a commonly used Chinese herb which turns brown during the natural drying process. However, sulfur fumed Pinelliae Rhizoma has a better appearance than naturally dried material. Sulfur fumed Pinelliae Rhizoma is potentially toxic due to the sulfur dioxide and sulfites formed during the fuming procedure. The odor components of sulfur fumed Pinelliae Rhizoma are complex. At present, there is no analytical method available to identify sulfur fumed Pinelliae Rhizoma simply and rapidly. To ensure medication safety, it is highly desirable to have an effective and simple method to identify sulfur fumed Pinelliae Rhizoma. Materials and Methods: This paper presents a novel approach using an electronic nose based on metal oxide sensors to identify whether Pinelliae Rhizoma was fumed with sulfur, and to predict the fuming degree. Multivariate statistical methods such as principal component analysis (PCA), discriminant factorial analysis (DFA) and partial least squares (PLS) were used for data analysis and identification. The use of the electronic nose to discriminate between Pinelliae Rhizoma of different fuming degrees and naturally dried Pinelliae Rhizoma was demonstrated. Results: The electronic nose was also successfully applied to identify unknown samples, including sulfur fumed and naturally dried samples, and a high recognition rate was obtained. Quantitative analysis of the fuming degree of Pinelliae Rhizoma was also demonstrated. The method developed is simple and fast, and provides a new quality control method for Chinese herbs from the aspect of odor. Conclusion: This metal oxide sensor-based electronic nose is sensitive to sulfur and sulfides. We suggest that it can serve as a supportive method to detect residual sulfur and sulfides. PMID:24914293
High throughput protein production screening
Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA
2009-09-08
Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic or eukaryotic sequences, including human cDNAs, using PCR and IVT methods, and detection of the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high throughput analysis of protein expression levels, interactions, and functional states.
An online database for plant image analysis software tools.
Lobet, Guillaume; Draye, Xavier; Périlleux, Claire
2013-10-09
Recent years have seen an increase in methods for plant phenotyping using image analysis. These methods require new software solutions for data extraction and treatment. These solutions are instrumental in supporting various research pipelines, ranging from the localisation of cellular compounds to the quantification of tree canopies. However, due to the variety of existing tools and the lack of a central repository, it is challenging for researchers to identify the software best suited to their research. We present an online, manually curated database referencing more than 90 plant image analysis software solutions. The website, plant-image-analysis.org, presents each solution in a uniform and concise manner, enabling users to identify those available for their experimental needs. The website also enables user feedback, evaluations and new software submissions. The plant-image-analysis.org database provides an overview of existing plant image analysis software. The aim of such a toolbox is to help users find solutions, and to provide developers a way to exchange and communicate about their work.
ANALYTICAL METHOD DEVELOPMENT FOR THE ANALYSIS OF N-NITROSODIMETHYLAMINE (NDMA) IN DRINKING WATER
N-Nitrosodimethylamine (NDMA), a by-product of the manufacture of liquid rocket fuel, has recently been identified as a contaminant in several California drinking water sources. The initial source of the contamination was identified as an aerospace facility. Subsequent testing ...
Solliec, Morgan; Roy-Lachapelle, Audrey; Sauvé, Sébastien
2015-12-30
Swine manure can contain a wide range of veterinary antibiotics, which could enter the environment via manure spreading on agricultural fields. A suspect and non-target screening method was applied to swine manure samples to attempt to identify veterinary antibiotics and pharmaceutical compounds for a future targeted analysis method. A combination of suspect and non-target screening method was developed to identify various veterinary antibiotic families using liquid chromatography coupled with high-resolution mass spectrometry (LC/HRMS). The sample preparation was based on the physicochemical parameters of antibiotics for the wide scope extraction of polar compounds prior to LC/HRMS analysis. The amount of data produced was processed by applying restrictive thresholds and filters to significantly reduce the number of compounds found and eliminate matrix components. The suspect and non-target screening was applied on swine manure samples and revealed the presence of seven common veterinary antibiotics and some of their relative metabolites, including tetracyclines, β-lactams, sulfonamides and lincosamides. However, one steroid and one analgesic were also identified. The occurrence of the identified compounds was validated by comparing their retention times, isotopic abundance patterns and fragmentation patterns with certified standards. This identification method could be very useful as an initial step to screen for and identify emerging contaminants such as veterinary antibiotics and pharmaceuticals in environmental and biological matrices prior to quantification. Copyright © 2015 John Wiley & Sons, Ltd.
George, Iniga S; Fennell, Anne Y; Haynes, Paul A
2015-09-01
Protein sample preparation optimisation is critical for establishing reproducible high throughput proteomic analysis. In this study, two different fractionation sample preparation techniques (in-gel digestion and in-solution digestion) for shotgun proteomics were used to quantitatively compare proteins identified in Vitis riparia leaf samples. The total number of proteins and peptides identified were compared between filter aided sample preparation (FASP) coupled with gas phase fractionation (GPF) and SDS-PAGE methods. There was a 24% increase in the total number of reproducibly identified proteins when FASP-GPF was used. FASP-GPF is more reproducible, less expensive and a better method than SDS-PAGE for shotgun proteomics of grapevine samples as it significantly increases protein identification across biological replicates. Total peptide and protein information from the two fractionation techniques is available in PRIDE with the identifier PXD001399 (http://proteomecentral.proteomexchange.org/dataset/PXD001399). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Asadi Haroni, Hooshang; Hassan Tabatabaei, Seyed
2016-04-01
The Muteh gold mining area is located 160 km northwest of the city of Isfahan. Gold mineralization is of mesothermal type and is associated with silicic, sericitic and carbonate alterations as well as with hematite and goethite. Image processing and interpretation were applied to ASTER satellite imagery covering about 400 km2 of the Muteh gold mining area to identify hydrothermal alterations and iron oxides associated with gold mineralization. After applying preprocessing steps such as radiometric and geometric corrections, the image processing methods of Principal Components Analysis (PCA), Least Square Fit (Ls-Fit) and Spectral Angle Mapper (SAM) were applied to the ASTER data to identify hydrothermal alterations and iron oxides. In this research, reference spectra of minerals such as chlorite, hematite, clay minerals and phengite, identified from laboratory spectral analysis of collected samples, were used to map the hydrothermal alterations. Finally, the identified hydrothermal alterations and iron oxides were validated by field visits and sampling of some of the mapped hydrothermal alteration zones.
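Of the three mapping methods named, SAM is the simplest to state: it measures the angle between each pixel spectrum and a laboratory reference spectrum, treating both as vectors in band space. A minimal sketch (array names, the threshold, and the reference mineral are illustrative assumptions):

```python
import numpy as np

def spectral_angle(cube, reference):
    """SAM: angle (radians) between each pixel spectrum and a reference.
    cube: (rows, cols, bands); reference: (bands,). Small angles = match."""
    dots = cube @ reference
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

# Pixels below an angle threshold map the target alteration mineral, e.g.:
# alteration_mask = spectral_angle(aster_cube, chlorite_spectrum) < 0.1
```

Because the angle ignores vector magnitude, SAM is largely insensitive to illumination and albedo differences, which is why it is a standard choice for alteration mapping from satellite imagery.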
Hill, Sarah R; Vale, Luke; Hunter, David; Henderson, Emily; Oluboyede, Yemi
2017-12-01
Public health interventions have unique characteristics compared to health technologies, which present additional challenges for economic evaluation (EE). High quality EEs that are able to address the particular methodological challenges are important for public health decision-makers. In England, they are even more pertinent given the transition of public health responsibilities in 2013 from the National Health Service to local government authorities, where new agents are shaping policy decisions. Addressing alcohol misuse is a globally prioritised public health issue. This article provides a systematic review of EE and priority-setting studies for interventions to prevent and reduce alcohol misuse published internationally over the past decade (2006-2016). The review appraises the EE and priority-setting evidence to establish whether it is sufficient to meet the informational needs of public health decision-makers. A total of 619 studies were identified via database searches. Seven additional studies were identified via hand searching journals, grey literature and reference lists. Twenty-seven met the inclusion criteria. Methods identified included cost-utility analysis (18), cost-effectiveness analysis (6), cost-benefit analysis (CBA) (1), cost-consequence analysis (CCA) (1) and return-on-investment (1). The review identified a lack of consideration of the methodological challenges associated with evaluating public health interventions, and limited use of methods such as CBA and CCA, which have been recommended as potentially useful for EE in public health. No studies using other specific priority-setting tools were identified. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
7TMRmine: a Web server for hierarchical mining of 7TMR proteins
Lu, Guoqing; Wang, Zhifang; Jones, Alan M; Moriyama, Etsuko N
2009-01-01
Background Seven-transmembrane region-containing receptors (7TMRs) play central roles in eukaryotic signal transduction. Due to their biomedical importance, thorough mining of 7TMRs from diverse genomes has been an active target of bioinformatics and pharmacogenomics research. The need for new and accurate 7TMR/GPCR prediction tools is paramount with the accelerated rate of acquisition of diverse sequence information. Currently available and often used protein classification methods (e.g., profile hidden Markov Models) are highly accurate at assigning membership among already known 7TMR subfamilies. However, these alignment-based methods are less effective for identifying remote similarities, e.g., identifying proteins from highly divergent or possibly new 7TMR families. In this regard, more sensitive (e.g., alignment-free) methods are needed to complement the existing protein classification methods. A better strategy would be to combine different classifiers, from more specific to more sensitive methods, to identify a broader spectrum of 7TMR protein candidates. Description We developed a Web server, 7TMRmine, by integrating alignment-free and alignment-based classifiers specifically trained to identify candidate 7TMR proteins as well as transmembrane (TM) prediction methods. This new tool enables researchers to easily assess the distribution of GPCR functionality in diverse genomes or individual newly-discovered proteins. 7TMRmine is easily customized and facilitates exploratory analysis of diverse genomes. Users can integrate various alignment-based, alignment-free, and TM-prediction methods in any combination and in any hierarchical order. Sixteen classifiers (including two TM-prediction methods) are available on the 7TMRmine Web server. The 7TMRmine tool can be used not only for 7TMR mining but also for general TM-protein analysis. Users can submit protein sequences for analysis, or explore pre-analyzed results for multiple genomes. The server currently includes prediction results and the summary statistics for 68 genomes. Conclusion 7TMRmine facilitates the discovery of 7TMR proteins. By combining prediction results from different classifiers in a multi-level filtering process, prioritized sets of 7TMR candidates can be obtained for further investigation. 7TMRmine can also be used as a general TM-protein classifier. Comparisons of TM and 7TMR protein distributions among 68 genomes revealed interesting differences in the evolution of these protein families among major eukaryotic phyla. PMID:19538753
Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C
2014-12-01
It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched on the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety and Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
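The logistic-regression approach to DIF compares a model that predicts an item response from the matching total score against one that adds group membership. A minimal sketch with simulated data; note this dichotomizes the item for simplicity, whereas the study above used ordinal logistic regression on the 0-3 HADS items:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
total = rng.normal(size=n)                # matching variable: scale total score
group = rng.integers(0, 2, size=n)        # e.g., sex
# Simulated item with a planted uniform-DIF effect of the group.
p = 1.0 / (1.0 + np.exp(-(total + 0.5 * group)))
item = (rng.random(n) < p).astype(int)

base = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
aug = sm.Logit(item, sm.add_constant(np.column_stack([total, group]))).fit(disp=0)
lr = 2 * (aug.llf - base.llf)  # ~ chi-square(1 df) under "no uniform DIF"
print(round(lr, 2))
```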
Methods and approaches in the topology-based analysis of biological pathways
Mitrea, Cristina; Taghavi, Zeinab; Bokanizad, Behzad; Hanoudi, Samer; Tagett, Rebecca; Donato, Michele; Voichiţa, Călin; Drăghici, Sorin
2013-01-01
The goal of pathway analysis is to identify the pathways significantly impacted in a given phenotype. Many current methods are based on algorithms that consider pathways as simple gene lists, dramatically under-utilizing the knowledge that such pathways are meant to capture. During the past few years, a plethora of methods claiming to incorporate various aspects of the pathway topology have been proposed. These topology-based methods, sometimes referred to as “third generation,” have the potential to better model the phenomena described by pathways. Although there is now a large variety of approaches used for this purpose, no review is currently available to offer guidance for potential users and developers. This review covers 22 such topology-based pathway analysis methods published in the last decade. We compare these methods based on: type of pathways analyzed (e.g., signaling or metabolic), input (subset of genes, all genes, fold changes, gene p-values, etc.), mathematical models, pathway scoring approaches, output (one or more pathway scores, p-values, etc.) and implementation (web-based, standalone, etc.). We identify and discuss challenges, arising both in methodology and in pathway representation, including inconsistent terminology, different data formats, lack of meaningful benchmarks, and the lack of tissue and condition specificity. PMID:24133454
Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Atkins, Harold L.; Pampell, Alyssa
2011-01-01
A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.
Christ, Ana Paula Guarnieri; Ramos, Solange Rodrigues; Cayô, Rodrigo; Gales, Ana Cristina; Hachich, Elayse Maria; Sato, Maria Inês Zanoli
2017-05-15
MALDI-TOF Mass Spectrometry Biotyping has proven to be a reliable method for identifying bacteria at the species level based on analysis of the ribosomal protein mass fingerprint. We evaluated the usefulness of this method for identifying Enterococcus species isolated from marine recreational waters at Brazilian beaches. A total of 127 Enterococcus spp. isolates were identified to species level by bioMérieux's API® 20 Strep and MALDI-TOF systems. The biochemical test identified 117/127 isolates (92%), whereas MALDI-TOF identified 100% of the isolates, with an agreement of 63% between the methods. 16S rRNA gene sequencing of isolates with discrepant results showed that MALDI-TOF and API® correctly identified 74% and 11% of these isolates, respectively. This discrepancy probably reflects the bias of the API® system toward identifying clinical isolates. MALDI-TOF proved to be a feasible approach for identifying Enterococcus from environmental matrices, increasing the rapidness and accuracy of results. Copyright © 2017 Elsevier Ltd. All rights reserved.
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
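To make the attractor-finding problem concrete: for a tiny Boolean network, steady states can be found by brute force, whereas tools like ADAM solve the equivalent polynomial system algebraically and so scale to models where enumeration is infeasible. The network below is invented, and the sketch finds only fixed points, not cyclic attractors:

```python
from itertools import product

# A small hypothetical Boolean network: the function returns the next state.
def step(state):
    x, y, z = state
    return (y and z, x, not x)

# Exhaustive fixed-point (steady-state attractor) search over all 2^3 states.
fixed_points = [s for s in product([False, True], repeat=3) if step(s) == s]
print(fixed_points)  # e.g., [(False, False, True)]
```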
2013-01-01
Despite its prominence for characterization of complex mixtures, LC–MS/MS frequently fails to identify many proteins. Network-based analysis methods, based on protein–protein interaction networks (PPINs), biological pathways, and protein complexes, are useful for recovering non-detected proteins, thereby enhancing analytical resolution. However, network-based analysis methods do come in varied flavors for which the respective efficacies are largely unknown. We compare the recovery performance and functional insights from three distinct instances of PPIN-based approaches, viz., Proteomics Expansion Pipeline (PEP), Functional Class Scoring (FCS), and Maxlink, in a test scenario of valproic acid (VPA)-treated mice. We find that the most comprehensive functional insights, as well as best non-detected protein recovery performance, are derived from FCS utilizing real biological complexes. This outstrips other network-based methods such as Maxlink or Proteomics Expansion Pipeline (PEP). From FCS, we identified known biological complexes involved in epigenetic modifications, neuronal system development, and cytoskeletal rearrangements. This is congruent with the observed phenotype where adult mice showed an increase in dendritic branching to allow the rewiring of visual cortical circuitry and an improvement in their visual acuity when tested behaviorally. In addition, PEP also identified a novel complex, comprising YWHAB, NR1, NR2B, ACTB, and TJP1, which is functionally related to the observed phenotype. Although our results suggest different network analysis methods can produce different results, on the whole, the findings are mutually supportive. More critically, the non-overlapping information each provides can provide greater holistic understanding of complex phenotypes. PMID:23557376
Lu, Mingbo; Zhang, Yang'e; Zhao, Chunfang; Zhou, Pengpeng; Yu, Longjiang
2010-01-01
This study presents an HPLC method for simultaneous analysis of astaxanthin and its carotenoid precursors from Xanthophyllomyces dendrorhous. The method employs a C18 column and a methanol/water/acetonitrile/dichloromethane (70:4:13:13, v/v/v/v) mobile phase. Astaxanthin is quantified by detection at 480 nm. The carotenoid precursors are identified by LC-APCI-MS and UV-vis absorption spectra. Peaks shown in the HPLC chromatogram are identified as carotenoids in the monocyclic biosynthetic pathway or their derivatives. In the monocyclic carotenoid pathway, 3,3'-dihydroxy-beta,psi-carotene-4,4'-dione (DCD) is produced via gamma-carotene and torulene.
Evaluation of proteomic search engines for the analysis of histone modifications.
Yuan, Zuo-Fei; Lin, Shu; Molden, Rosalynn C; Garcia, Benjamin A
2014-10-03
Identification of histone post-translational modifications (PTMs) is challenging for proteomics search engines. Including many histone PTMs in one search increases the number of candidate peptides dramatically, leading to low search speed and fewer identified spectra. To evaluate database search engines on identifying histone PTMs, we present a method in which one kind of modification is searched each time, for example, unmodified, individually modified, and multimodified, each search result is filtered with false discovery rate less than 1%, and the identifications of multiple search engines are combined to obtain confident results. We apply this method for eight search engines on histone data sets. We find that two search engines, pFind and Mascot, identify most of the confident results at a reasonable speed, so we recommend using them to identify histone modifications. During the evaluation, we also find some important aspects for the analysis of histone modifications. Our evaluation of different search engines on identifying histone modifications will hopefully help those who are hoping to enter the histone proteomics field. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD001118.
How to conduct a qualitative meta-analysis: Tailoring methods to enhance methodological integrity.
Levitt, Heidi M
2018-05-01
Although qualitative research has long been of interest in the field of psychology, meta-analyses of qualitative literatures (sometimes called meta-syntheses) are still quite rare. Like quantitative meta-analyses, these methods function to aggregate findings and identify patterns across primary studies, but their aims, procedures, and methodological considerations may vary. This paper explains the function of qualitative meta-analyses and their methodological development. Recommendations have broad relevance but are framed with an eye toward their use in psychotherapy research. Rather than arguing for the adoption of any single meta-method, this paper advocates for considering how procedures can best be selected and adapted to enhance a meta-study's methodological integrity. Throughout the paper, recommendations are provided to help researchers identify procedures that can best serve their studies' specific goals. Meta-analysts are encouraged to consider the methodological integrity of their studies in relation to central research processes, including identifying a set of primary research studies, transforming primary findings into initial units of data for a meta-analysis, developing categories or themes, and communicating findings. The paper provides guidance for researchers who desire to tailor meta-analytic methods to meet their particular goals while enhancing the rigor of their research.
Identification of Load Categories in Rotor System Based on Vibration Analysis
Yang, Zhaojian
2017-01-01
Rotating machinery is often subjected to variable loads during operation, so monitoring and identifying different load types is important. Here, five typical load types were studied qualitatively for a rotor system, and a novel load category identification method based on vibration signals is proposed. The method combines ensemble empirical mode decomposition (EEMD), energy feature extraction, and a back propagation (BP) neural network. A dedicated load identification test bench for the rotor system was developed. According to the load characteristics and test conditions, an experimental plan was formulated and loading tests for the five load types were conducted. The corresponding vibration signals of the rotor system were collected for each load condition via an eddy current displacement sensor. Signals were reconstructed using EEMD, features were extracted, and their energies calculated. Finally, these characteristics were input to the BP neural network to identify the different load types. Comparison of the identification results with the test data revealed an overall identification rate of 94.54%, demonstrating high identification accuracy and good robustness, and showing that the proposed method is feasible. Given the reliable and experimentally validated results, this method can be applied to load identification and fault diagnosis for rotor equipment used in engineering applications. PMID:28726754
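The EEMD-to-energy-features step can be sketched compactly, assuming the PyEMD package (pip name "EMD-signal") is available; the IMF truncation to a fixed length and the classifier settings are illustrative choices, not the paper's:

```python
import numpy as np
from PyEMD import EEMD  # assumed available (pip package "EMD-signal")
from sklearn.neural_network import MLPClassifier

def energy_features(signal):
    """Decompose a vibration signal with EEMD and use normalized per-IMF
    energies as the feature vector, mirroring the pipeline above."""
    imfs = EEMD().eemd(signal)
    e = np.array([np.sum(imf ** 2) for imf in imfs])
    return e / e.sum()

# Hypothetical training loop (signals and load_labels come from the rig):
# features = np.array([energy_features(s)[:8] for s in signals])
# clf = MLPClassifier(hidden_layer_sizes=(16,)).fit(features, load_labels)
```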
Kandadai, Venk; Yang, Haodong; Jiang, Ling; Yang, Christopher C; Fleisher, Linda; Winston, Flaura Koplin
2016-05-05
Little is known about the ability of individual stakeholder groups to achieve health information dissemination goals through Twitter. This study aimed to develop and apply methods for the systematic evaluation and optimization of health information dissemination by stakeholders through Twitter. Tweet content from 1790 followers of @SafetyMD (July-November 2012) was examined. User emphasis, a new indicator of Twitter information dissemination, was defined and applied to retweets across two levels of retweeters originating from @SafetyMD. User interest clusters were identified based on principal component analysis (PCA) and hierarchical cluster analysis (HCA) of a random sample of 170 followers. User emphasis of keywords remained across levels but decreased by 9.5 percentage points. PCA and HCA identified 12 statistically unique clusters of followers within the @SafetyMD Twitter network. This study is one of the first to develop methods for use by stakeholders to evaluate and optimize their use of Twitter to disseminate health information. Our new methods provide preliminary evidence that individual stakeholders can evaluate the effectiveness of health information dissemination and create content-specific clusters for more specific targeted messaging.
Neural net controlled tag gas sampling system for nuclear reactors
Gross, Kenneth C.; Laug, Matthew T.; Lambert, John D. B.; Herzog, James P.
1997-01-01
A method and system for providing a tag gas identifier to a nuclear fuel rod and analyzing escaped tag gas to identify the particular failed nuclear fuel rod. The method and system include disposing a unique tag gas composition into the plenum of a nuclear fuel rod, monitoring gamma ray activity, analyzing gamma ray signals to assess whether a nuclear fuel rod has failed and is emitting tag gas, activating a tag gas sampling and analysis system upon sensing tag gas emission from a failed rod, and evaluating the escaped tag gas to identify the particular failed nuclear fuel rod.
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It identifies coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it does provide other information because it incorporates textural similarity as well as homogeneity. In addition it can be useful in determining various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
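Constrained clustering of this kind is what modern change-point libraries implement. A minimal sketch using the ruptures package (assumed available); the penalized cost here is a stand-in for the paper's minimum message length criterion, and the signal is invented:

```python
import numpy as np
import ruptures as rpt  # assumed available change-point detection library

# Hypothetical 1-D vegetation measure along a transect with two regime shifts.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(m, 0.5, 100) for m in (0.0, 3.0, 1.0)])

# Penalized change-point detection: each segment is a contiguous "cluster"
# of adjacent samples; the penalty plays the model-selection role that the
# message-length criterion plays in the paper.
algo = rpt.Pelt(model="l2").fit(signal)
breakpoints = algo.predict(pen=10)
print(breakpoints)  # e.g., [100, 200, 300]
```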
The Use of Citation Counting to Identify Research Trends
ERIC Educational Resources Information Center
Rothman, Harry; Woodhead, Michael
1971-01-01
The analysis and application of manpower statistics to identify some long-term international research trends in economic entomology and pest control are described. Movements in research interests, particularly towards biological methods of control, correlations between these sectors, and the difficulties encountered in the construction of a…
Proteomic analysis of mare follicular fluid during late follicle development.
Fahiminiya, Somayyeh; Labas, Valérie; Roche, Stéphane; Dacheux, Jean-Louis; Gérard, Nadine
2011-09-17
Follicular fluid accumulates in the antrum of the follicle from the early stages of follicle development. Studies on its components may contribute to a better understanding of the mechanisms underlying follicular development and oocyte quality. With this objective, we performed a proteomic analysis of mare follicular fluid. First, we hypothesized that proteins in follicular fluid may differ from those in the serum, and also may change during follicle development. Second, we used four different immunodepletion approaches and one enrichment method in order to overcome the masking effect of high-abundance proteins present in the follicular fluid and to identify those present in lower abundance. Finally, we compared our results with previous studies performed in mono-ovulant (human) and poly-ovulant (porcine and canine) species in an attempt to identify common and/or species-specific proteins. Follicular fluid samples were collected from ovaries at three different stages of follicle development (early dominant, late dominant and preovulatory). Blood samples were also collected at each time. The proteomic analysis was carried out on crude, depleted and enriched follicular fluid by 2D-PAGE, 1D-PAGE and mass spectrometry. A total of 459 protein spots were visualized by 2D-PAGE of crude mare follicular fluid, with no difference among the three physiological stages. Thirty proteins were observed to be differentially expressed between serum and follicular fluid. The enrichment method was found to be the most powerful for detection and identification of low-abundance proteins in follicular fluid: we were able to identify 18 proteins in the crude follicular fluid and as many as 113 in the enriched follicular fluid. Inhibins and a few other proteins involved in reproduction could only be identified after enrichment of follicular fluid, demonstrating the power of the method used. The comparison of proteins found in mare follicular fluid with proteins previously identified in human, porcine and canine follicular fluids led to the identification of 12 common proteins and of several species-specific proteins. This study provides the first description of the mare follicular fluid proteome during the late stages of follicle development. We identified several proteins from crude, depleted and enriched follicular fluid. Our results demonstrate that the enrichment method, combined with 2D-PAGE and mass spectrometry, can be successfully used to visualize and further identify the low-abundance proteins in follicular fluid.
Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.
Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L
2018-02-01
Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
Guais, Olivier; Borderies, Gisèle; Pichereaux, Carole; Maestracci, Marc; Neugnot, Virginie; Rossignol, Michel; François, Jean Marie
2008-12-01
MS/MS techniques are now well established for proteomic analysis, even for non-sequenced organisms, since peptide sequences obtained by these methods can be matched with those found in databases from closely related sequenced organisms. We used this approach to characterize the protein content of "Rovabio Excel", an enzymatic cocktail produced by Penicillium funiculosum that is used as a feed additive in animal nutrition. Protein separation by two-dimensional electrophoresis yielded more than 100 spots, from which 37 proteins were unambiguously assigned from peptide sequences. By one-dimensional SDS-gel electrophoresis, 34 proteins were identified, among which 8 were not found in the 2-DE analysis. A third method, termed 'peptidic shotgun', which consists of direct treatment of the cocktail with trypsin followed by separation of the peptides by two-dimensional liquid chromatography, resulted in the identification of two additional proteins not found by the other two methods. Altogether, more than 50 proteins, including several glycosylhydrolytic, hemicellulolytic and proteolytic enzymes, were identified in this enzymatic cocktail by combining the three separation methods. This work confirms the power of proteome analysis to explore the genome expression of a non-sequenced fungus by taking advantage of sequences from phylogenetically related filamentous fungi and paves the way for further functional analysis of P. funiculosum.
The Essential Genome of Escherichia coli K-12
2018-01-01
ABSTRACT Transposon-directed insertion site sequencing (TraDIS) is a high-throughput method coupling transposon mutagenesis with short-fragment DNA sequencing. It is commonly used to identify essential genes. Single gene deletion libraries are considered the gold standard for identifying essential genes. Currently, the TraDIS method has not been benchmarked against such libraries, and therefore, it remains unclear whether the two methodologies are comparable. To address this, a high-density transposon library was constructed in Escherichia coli K-12. Essential genes predicted from sequencing of this library were compared to existing essential gene databases. To decrease false-positive identification of essential genes, statistical data analysis included corrections for both gene length and genome length. Through this analysis, new essential genes and genes previously incorrectly designated essential were identified. We show that manual analysis of TraDIS data reveals novel features that would not have been detected by statistical analysis alone. Examples include short essential regions within genes, orientation-dependent effects, and fine-resolution identification of genome and protein features. Recognition of these insertion profiles in transposon mutagenesis data sets will assist genome annotation of less well characterized genomes and provides new insights into bacterial physiology and biochemistry. PMID:29463657
Non-destructive testing of full-length bonded rock bolts based on HHT signal analysis
NASA Astrophysics Data System (ADS)
Shi, Z. M.; Liu, L.; Peng, M.; Liu, C. C.; Tao, F. J.; Liu, C. S.
2018-04-01
Full-length bonded rock bolts are commonly used in mining, tunneling and slope engineering because of their simple design and resistance to corrosion. However, the length of a rock bolt and the grouting quality often do not meet the required design standards in practice because of the concealment and complexity of bolt construction. Non-destructive testing is preferred when testing a rock bolt's quality because of the convenience, low cost and wide detection range. In this paper, a signal analysis method for the non-destructive sound wave testing of full-length bonded rock bolts is presented, which is based on the Hilbert-Huang transform (HHT). First, we introduce the HHT analysis method to calculate the bolt length and identify defect locations based on sound wave reflection test signals, which includes decomposing the test signal via empirical mode decomposition (EMD), selecting the intrinsic mode functions (IMF) using the Pearson Correlation Index (PCI) and calculating the instantaneous phase and frequency via the Hilbert transform (HT). Second, six model tests are conducted using different grouting defects and bolt protruding lengths to verify the effectiveness of the HHT analysis method. Lastly, the influence of the bolt protruding length on the test signal, identification of multiple reflections from defects, bolt end and protruding end, and mode mixing from EMD are discussed. The HHT analysis method can identify the bolt length and grouting defect locations from signals that contain noise at multiple reflected interfaces. The reflection from the long protruding end creates an irregular test signal with many frequency peaks on the spectrum. The reflections from defects barely change the original signal because they are low energy, which cannot be adequately resolved using existing methods. The HHT analysis method can identify reflections from the long protruding end of the bolt and multiple reflections from grouting defects based on mutations in the instantaneous frequency, which makes weak reflections more noticeable. The mode mixing phenomenon is observed in several tests, but this does not markedly affect the identification results due to the simple medium in bolt tests. The mode mixing can be reduced by ensemble EMD (EEMD) or complete ensemble EMD with adaptive noise (CEEMDAN), which are powerful tools for analyzing the test signal in a complex medium and may play an important role in future studies. The HHT bolt signal analysis method is a self-adaptive and automatic process, which can be programmed as analysis software and will make bolt tests more convenient in practice.
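A compact sketch of the HHT chain described above (EMD decomposition, IMF selection by Pearson correlation, Hilbert-transform instantaneous frequency), assuming the PyEMD and SciPy packages; the synthetic trace and sampling rate are placeholders, not bolt-test data.

```python
# EMD -> PCI-based IMF selection -> instantaneous frequency via the Hilbert transform.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

fs = 50_000.0                                    # sampling rate (placeholder)
t = np.arange(0, 0.02, 1 / fs)
signal = np.sin(2 * np.pi * 3000 * t) + 0.3 * np.random.randn(t.size)

imfs = EMD().emd(signal, t)
pci = [abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs]
best = imfs[int(np.argmax(pci))]                 # keep the best-correlated IMF

analytic = hilbert(best)
phase = np.unwrap(np.angle(analytic))            # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
# Abrupt jumps ("mutations") in inst_freq mark reflections from defects or the bolt end.
```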
Methods for the evaluation of alternative disaster warning systems
NASA Technical Reports Server (NTRS)
Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.
1977-01-01
For each of the methods identified, a theoretical basis is provided and an illustrative example is described. The example includes sufficient realism and detail to enable an analyst to conduct an evaluation of other systems. The methods discussed in the study include equal capability cost analysis, consumers' surplus, and statistical decision theory.
ERIC Educational Resources Information Center
Potoczak, Kathryn; Carr, James E.; Michael, Jack
2007-01-01
Two distinct analytic methods have been used to identify the function of problem behavior. The antecedent-behavior-consequence (ABC) method (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) includes the delivery of consequences for problem behavior. The AB method (Carr & Durand, 1985) does not include consequence delivery, instead relying…
Farmers' Preferences for Methods of Receiving Information on New or Innovative Farming Practices.
ERIC Educational Resources Information Center
Riesenberg, Lou E.; Gor, Christopher Obel
1989-01-01
Survey of 386 Idaho farmers (response rate 58 percent) identified preferred methods of receiving information on new or innovative farming practices. Analysis revealed preference for interpersonal methods (demonstrations, tours, and field trips) over mass media such as computer-assisted instruction (CAI) and home study, although younger farmers,…
40 CFR 63.805 - Performance test methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Collection of Coating and Ink Samples for VOC Content Analysis by Reference Method 24 and Reference Method... determine the VHAP content of the liquid coating. Formulation data shall be used to identify VHAP present in... the solids content by weight and the density of coatings. If it is demonstrated to the satisfaction of...
Lepedda, Antonio J; Nieddu, Gabriele; Zinellu, Elisabetta; De Muro, Pierina; Piredda, Franco; Guarino, Anna; Spirito, Rita; Carta, Franco; Turrini, Francesco; Formato, Marilena
2013-01-01
Apolipoproteins are a very heterogeneous protein family, implicated in plasma lipoprotein structural stabilization, lipid metabolism, inflammation, or immunity. Obtaining detailed information on apolipoprotein composition and structure may contribute to elucidating lipoprotein roles in atherogenesis and to developing new therapeutic strategies for the treatment of lipoprotein-associated disorders. This study aimed at developing a comprehensive method for characterizing the apolipoprotein component of plasma VLDL, LDL, and HDL fractions from patients undergoing carotid endarterectomy, by means of two-dimensional electrophoresis (2-DE) coupled with mass spectrometry analysis, useful for identifying potential markers of plaque presence and vulnerability. The adopted method yielded reproducible 2-DE maps of exchangeable apolipoproteins from VLDL, LDL, and HDL. Twenty-three protein isoforms were identified by peptide mass fingerprinting analysis. Differential proteomic analysis identified increased levels of acute-phase serum amyloid A protein (AP SAA) in all lipoprotein fractions, especially in LDL from atherosclerotic patients. Results were confirmed by western blotting analysis on each lipoprotein fraction, using apo AI levels for data normalization. The higher levels of AP SAA found in patients suggest a role for LDL as an AP SAA carrier into the subendothelial space of the artery wall, where AP SAA accumulates and may exert noxious effects.
Chen Peng; Ao Li
2017-01-01
The emergence of multi-dimensional data offers opportunities for more comprehensive analysis of the molecular characteristics of human diseases and, therefore, for improving diagnosis, treatment, and prevention. In this study, we proposed a heterogeneous network-based method integrating multi-dimensional data (HNMD) to identify GBM-related genes. The novelty of the method lies in combining the multi-dimensional GBM data from the TCGA dataset, which provide comprehensive information about genes, with protein-protein interactions to construct a weighted heterogeneous network that reflects both the general and the disease-specific relationships between genes. In addition, a propagation algorithm with resistance is introduced to precisely score and rank GBM-related genes. The results of a comprehensive performance evaluation show that the proposed method significantly outperforms network-based methods that use single-dimensional data, as well as other existing approaches. Subsequent analysis of the top-ranked genes suggests they may be functionally implicated in GBM, which further corroborates the superiority of the proposed method. The source code and the results of HNMD can be downloaded from the following URL: http://bioinformatics.ustc.edu.cn/hnmd/ .
Richardson, A K; Clarke, G; Sabel, C E; Pearson, J F; Mason, D F; Taylor, B V
2012-11-01
Identifying eligible individuals for a prevalence survey is difficult in the absence of a disease register or a national population register. The aim was to develop a method to identify and invite eligible individuals to participate in a national prevalence survey while maintaining confidentiality and complying with privacy legislation. A unique identifier (based on date of birth, sex and initials) was developed so that database holders could identify eligible individuals, notify us and invite them on our behalf to participate in a national multiple sclerosis prevalence survey while maintaining confidentiality and complying with privacy legislation. Several organisations (including central government, health and non-governmental organisations) used the method described to assign unique identifiers to individuals listed on their databases and to forward invitations and consent forms to them. The use of a unique identifier allowed us to recognise and record all the sources of identification for each individual. This prevented double counting or approaching the same individual more than once and facilitated the use of capture-recapture methods to improve the prevalence estimate. Capture-recapture analysis estimated that the method identified over 96% of eligible individuals in this prevalence survey. This method was developed and used successfully in a national prevalence survey of multiple sclerosis in New Zealand. The method may be useful for prevalence surveys of other diseases in New Zealand and for prevalence surveys in other countries with similar privacy legislation and lack of disease registers and population registers. © 2012 The Authors; Internal Medicine Journal © 2012 Royal Australasian College of Physicians.
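The two mechanics in this abstract, an identifier assembled from date of birth, sex and initials, and a capture-recapture estimate from overlapping sources, can be sketched as follows; the names, counts and the simple two-source Lincoln-Petersen estimator are illustrative only, and a real deployment would add hashing and governance for privacy.

```python
# Quasi-unique identifier plus a two-source capture-recapture estimate.
def make_identifier(dob: str, sex: str, initials: str) -> str:
    """e.g. make_identifier('1964-03-07', 'F', 'JS') -> '19640307-F-JS'"""
    return f"{dob.replace('-', '')}-{sex.upper()}-{initials.upper()}"

source_a = {make_identifier("1964-03-07", "F", "JS"),
            make_identifier("1971-11-21", "M", "TK")}
source_b = {make_identifier("1964-03-07", "F", "JS"),
            make_identifier("1980-05-02", "F", "AB")}

m = len(source_a & source_b)                  # individuals captured by both sources
if m:                                         # Lincoln-Petersen estimator
    n_hat = len(source_a) * len(source_b) / m
    print(f"estimated eligible population: {n_hat:.0f}")
```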
Lab-on-a-chip nucleic-acid analysis towards point-of-care applications
NASA Astrophysics Data System (ADS)
Kopparthy, Varun Lingaiah
Recent infectious disease outbreaks, such as Ebola in 2013, highlight the need for fast and accurate diagnostic tools to combat the global spread of disease. Detection and identification of the disease-causing viruses and bacteria at the genetic level is required for accurate diagnosis. Nucleic acid analysis systems have shown promise in identifying diseases such as HIV, anthrax, and Ebola in the past. Conventional nucleic acid analysis systems are still time consuming and are not suitable for point-of-care applications. Miniaturized nucleic acid systems have shown great promise for rapid analysis, but they have not been commercialized due to several factors such as footprint, complexity, portability, and power consumption. This dissertation presents the development of technologies and methods for lab-on-a-chip nucleic-acid analysis towards point-of-care applications. An oscillatory-flow PCR methodology in a thermal gradient is developed which provides real-time analysis of nucleic-acid samples. Oscillatory-flow PCR was performed in the microfluidic device under a thermal gradient in 40 minutes. Reverse transcription PCR (RT-PCR) was achieved in the system without an additional heating element for the reverse transcription incubation step. A novel method is developed for the simultaneous patterning and bonding of all-glass microfluidic devices in a microwave oven. Glass microfluidic devices were fabricated in less than 4 minutes. Towards an integrated system for the detection of amplified products, a thermal sensing method is studied for the optimization of the sensor output. The calorimetric sensing method is characterized to identify design considerations and optimal parameters, such as placement of the sensor, steady-state response, and flow velocity, for improved performance. An understanding of these developed technologies and methods will facilitate the development of lab-on-a-chip systems for point-of-care analysis.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
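The reliability idea can be sketched as a simple count over repeated runs: after many SaTScan executions with varied scaling parameters, score each county by the fraction of runs in which it falls inside a statistically significant cluster. The run matrix below is a random placeholder for real SaTScan output.

```python
# Reliability score across repeated spatial-scan runs with varied parameters.
import numpy as np

n_counties, n_runs = 3000, 50
rng = np.random.default_rng(1)
# in_cluster[i, j] = True if county j lies in a significant cluster in run i
in_cluster = rng.random((n_runs, n_counties)) < 0.1   # placeholder for SaTScan results

reliability = in_cluster.mean(axis=0)       # fraction of runs flagging each county
stable = np.flatnonzero(reliability > 0.8)  # counties stable across analysis scales
print(len(stable), "counties form stable, scale-robust clusters")
```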
Price Analysis on Commercial Item Purchases Within the Department of the Navy
2015-04-30
The importance of market research and price analysis methods has increased because of this change (Gera & Maddox, 2013), and regulations require that pricing be discussed in the market research reports (p. 54). The FAR identifies market research as a method for determining price…
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex disease/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features. Thus, it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach. Through our simulation study we propose a collective feature selection approach to select features that are in the "union" of the best performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. We choose our top performing methods to select the union of the resulting variables based on a user-defined percentage of variants selected from each method to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criteria for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and Gradient boosting, work best under other simulation criteria. Thus, using a collective approach proves to be more beneficial for selecting variables with epistatic effects also in low effect size datasets and different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of DiscovEHR collaboration). In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
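A minimal sketch of the collective selection step, assuming scikit-learn: take the union of the top-ranked variants from several different selectors before downstream epistasis analysis. The selectors, data and percentage are placeholders for the parametric, non-parametric and data-mining methods the study actually compared.

```python
# Collective feature selection: union of the top k variants from several methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=200, n_informative=6,
                           random_state=0)
top = int(0.01 * X.shape[1]) or 5            # user-defined percentage per method

def top_k(scores, k=top):
    return set(np.argsort(scores)[::-1][:k])

selected = (top_k(RandomForestClassifier(random_state=0).fit(X, y).feature_importances_)
            | top_k(GradientBoostingClassifier(random_state=0).fit(X, y).feature_importances_)
            | top_k(mutual_info_classif(X, y, random_state=0)))
print("union of selected variants:", sorted(selected))   # take these downstream
```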
Meyer, B.J.; Sellers, J.P.; Thomsen, J.U.
1993-06-08
Apparatus and processes are described for recognizing and identifying materials. Characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. Desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. The network is first trained according to a predetermined training process; it may then be employed to identify particular materials. Such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation.
Boucheron, Laura E
2013-07-16
Quantitative object and spatial arrangement-level analysis of tissue are detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphic user interface to edit designated regions in the image.
Identifications of ancient Egyptian royal mummies from the 18th Dynasty reconsidered.
Habicht, M E; Bouwman, A S; Rühli, F J
2016-01-01
For centuries, ancient Egyptian Royal mummies have drawn the attention both of the general public and scientists. Many royal mummies from the New Kingdom have survived. The discoveries of the bodies of these ancient rulers have always sparked much attention, yet not all identifications are clear even nowadays. This study presents a meta-analysis to demonstrate the difficulties in identifying ancient Egyptian royal mummies. Various methods and pitfalls in the identification of the Pharaohs are reassessed since new scientific methods can be used, such as ancient DNA-profiling and CT-scanning. While the ancestors of Tutankhamun have been identified, some identities are still highly controversial (e.g., the mystery of the KV-55 skeleton, recently most likely identified as the genetic father of Tutankhamun). The meta-analysis confirms the suggested identity of some mummies (e.g., Amenhotep III, Thutmosis IV, and Queen Tjye). © 2016 Wiley Periodicals, Inc.
Workplace Challenges: The Impact of Personal Beliefs and the Birth Environment.
Adams, Ellise D
This article reviews two workplace challenges faced by the perinatal nurse: the impact of personal beliefs and issues within the birth environment. It also explores how these challenges inform the birth practices of the perinatal nurse. The methods employed for this review were focus groups and a concept analysis. Two focus groups (n = 14) and a concept analysis based on a process defined by Walker and Avant provided a set of birth practices performed by the perinatal nurse who facilitates normal birth. Assertiveness was identified as a primary attribute of the perinatal nurse, and several empirical referents (methods of measuring the abstract concepts) are suggested for identifying the workplace challenges of the perinatal nurse. Development of effective processes, designed to overcome the many challenges facing the perinatal nurse, will assist in improving perinatal care for women and newborns.
Pathway analysis from lists of microRNAs: common pitfalls and alternative strategy
Godard, Patrice; van Eyll, Jonathan
2015-01-01
MicroRNAs (miRNAs) are involved in the regulation of gene expression at a post-transcriptional level. As such, monitoring miRNA expression has been increasingly used to assess their role in regulatory mechanisms of biological processes. In large scale studies, once miRNAs of interest have been identified, the target genes they regulate are often inferred using algorithms or databases. A pathway analysis is then often performed in order to generate hypotheses about the relevant biological functions controlled by the miRNA signature. Here we show that the method widely used in scientific literature to identify these pathways is biased and leads to inaccurate results. In addition to describing the bias and its origin we present an alternative strategy to identify potential biological functions specifically impacted by a miRNA signature. More generally, our study exemplifies the crucial need of relevant negative controls when developing, and using, bioinformatics methods. PMID:25800743
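The bias discussed here is commonly a background-set problem: testing predicted miRNA targets for pathway enrichment against the whole genome, rather than against the universe of genes that can be predicted as targets at all, inflates significance. A hypergeometric sketch with invented counts illustrates the direction of the effect.

```python
# Pathway enrichment p-values under two different background sets.
from scipy.stats import hypergeom

pathway_hits, targets = 40, 600           # pathway genes among the predicted targets
pathway_size_genome, genome = 300, 20000  # pathway size / whole-genome background
pathway_size_targetable, targetable = 250, 12000  # restricted to targetable genes

# P(X >= pathway_hits) under each background
p_genome = hypergeom.sf(pathway_hits - 1, genome, pathway_size_genome, targets)
p_targetable = hypergeom.sf(pathway_hits - 1, targetable,
                            pathway_size_targetable, targets)
print(f"whole-genome background:    p = {p_genome:.2e}")      # looks very significant
print(f"targetable-gene background: p = {p_targetable:.2e}")  # noticeably less so
```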
Nouri, Mohammad-Zaman; Komatsu, Setsuko
2010-05-01
To study the soybean plasma membrane proteome under osmotic stress, two methods were used: a gel-based and an LC MS/MS-based proteomics method. Two-day-old seedlings were subjected to 10% PEG for 2 days. Plasma membranes were purified from seedlings using a two-phase partitioning method, and their purity was verified by measuring ATPase activity. Using gel-based proteomics, four and eight protein spots were identified as up- and downregulated, respectively, whereas in the nanoLC MS/MS approach, 11 and 75 proteins were identified as up- and downregulated, respectively, under PEG treatment. Among the osmotic stress-responsive proteins, most of the transporter proteins, all proteins with a high number of transmembrane helices, and the low-abundance proteins could be identified by the LC MS/MS-based method. Three homologues of plasma membrane H(+)-ATPase, which are transporter proteins involved in ion efflux, were upregulated under osmotic stress. Gene expression of this protein was increased after 12 h of stress exposure. Among the identified proteins, seven were common to the two proteomics techniques, of which calnexin was the most highly upregulated. Accumulation of calnexin in the plasma membrane was confirmed by immunoblot analysis. These results suggest that under hyperosmotic conditions, calnexin accumulates in the plasma membrane and ion efflux is accelerated by upregulation of the plasma membrane H(+)-ATPase protein.
Kowalczyk, Marek; Sekuła, Andrzej; Mleczko, Piotr; Olszowy, Zofia; Kujawa, Anna; Zubek, Szymon; Kupiec, Tomasz
2015-02-01
The aim was to assess the usefulness of a DNA-based method for identifying mushroom species for application in forensic laboratory practice. Two hundred twenty-one samples of clinical forensic material (dried mushrooms, food remains, stomach contents, feces, etc) were analyzed. The ITS2 region of nuclear ribosomal DNA (nrDNA) was sequenced and the sequences were compared with reference sequences collected from the National Center for Biotechnology Information gene bank (GenBank). Sporological identification of mushrooms was also performed for 57 samples of clinical material. Of 221 samples, positive sequencing results were obtained for 152 (69%). The highest percentage of positive results was obtained for samples of dried mushrooms (96%) and food remains (91%). Comparison with GenBank sequences enabled identification of all samples at least at the genus level. Most samples (90%) were identified at the level of species or a group of closely related species. Sporological and molecular identification were consistent at the level of species or genus for 30% of analyzed samples. Molecular analysis identified a larger number of species than the sporological method. It proved to be suitable for analysis of evidential material (dried hallucinogenic mushrooms) in forensic genetic laboratories as well as to complement classical methods in the analysis of clinical material.
Bruno, C; Patin, F; Bocca, C; Nadal-Desbarats, L; Bonnier, F; Reynier, P; Emond, P; Vourc'h, P; Joseph-Delafont, K; Corcia, P; Andres, C R; Blasco, H
2018-01-30
Metabolomics is an emerging science based on diverse high-throughput methods that are rapidly evolving to improve metabolic coverage of biological fluids and tissues. Technical progress has led researchers to combine several analytical methods without reporting the impact of such a strategy on metabolic coverage. The objective of our study was to develop and validate several analytical techniques (mass spectrometry coupled to gas or liquid chromatography and nuclear magnetic resonance) for the metabolomic analysis of small muscle samples and to evaluate the impact of combining methods for more exhaustive metabolite coverage. We evaluated the muscle metabolome from the same pool of mouse muscle samples after 2 metabolite extraction protocols. Four analytical methods were used: targeted flow injection analysis coupled with mass spectrometry (FIA-MS/MS), gas chromatography coupled with mass spectrometry (GC-MS), liquid chromatography coupled with high-resolution mass spectrometry (LC-HRMS), and nuclear magnetic resonance (NMR) analysis. We evaluated the global variability of each compound, i.e., analytical (from quality controls) and extraction variability (from muscle extracts). We determined the best extraction method and we reported the common and distinct metabolites identified, based on the number and identity of the compounds detected with low analytical variability (variation coefficient <30%) for each method. Finally, we assessed the coverage of muscle metabolic pathways obtained. Methanol/chloroform/water and water/methanol were the best extraction solvents for muscle metabolome analysis by NMR and MS, respectively. We identified 38 metabolites by nuclear magnetic resonance, 37 by FIA-MS/MS, 18 by GC-MS, and 80 by LC-HRMS. The combination led us to identify a total of 132 metabolites with low variability, partitioned into 58 metabolic pathways, such as amino acid, nitrogen, purine, and pyrimidine metabolism, and the citric acid cycle. The combination also showed that the contribution of GC-MS was low when used together with the other mass spectrometry methods and nuclear magnetic resonance to explore muscle samples. This study reports the validation of several analytical methods, based on nuclear magnetic resonance and several mass spectrometry methods, to explore the muscle metabolome from a small amount of tissue, comparable to that obtained during a clinical trial. The combination of several techniques may be relevant for the exploration of muscle metabolism, with acceptable analytical variability and overlap between methods. However, the difficult and time-consuming data pre-processing, processing, and statistical analysis steps do not justify systematically combining analytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
van der Molen, Hugo H.
1984-01-01
Describes a study designed to demonstrate that child pedestrian training objectives may be identified systematically through various task analysis methods, making use of different types of empirical information. Early approaches to analysis of pedestrian tasks are reviewed, and an outline of the Traffic Research Centre's pedestrian task analysis…
A Systematic Review of Brief Functional Analysis Methodology with Typically Developing Children
ERIC Educational Resources Information Center
Gardner, Andrew W.; Spencer, Trina D.; Boelter, Eric W.; DuBard, Melanie; Jennett, Heather K.
2012-01-01
Brief functional analysis (BFA) is an abbreviated assessment methodology derived from traditional extended functional analysis methods. BFAs are often conducted when time constraints in clinics, schools or homes are of concern. While BFAs have been used extensively to identify the function of problem behavior for children with disabilities, their…
Developments in Cylindrical Shell Stability Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Starnes, James H., Jr.
1998-01-01
Today high-performance computing systems and new analytical and numerical techniques enable engineers to explore the use of advanced materials for shell design. This paper reviews some of the historical developments of shell buckling analysis and design. The paper concludes by identifying key research directions for reliable and robust methods development in shell stability analysis and design.
Inclusion of Aging in Rehabilitation Counseling Journals 2000-2012: A Content Analysis
ERIC Educational Resources Information Center
Kettaneh, Amani A.; Kinyanjui, Benson; Slevin, John R.; Slevin, Barbara; Harley, Debra A.
2015-01-01
Purpose: To conduct a content analysis of the rehabilitation counseling literature to identify articles published on aging. Method: To determine the number of articles that were published on aging in rehabilitation counseling journals, a content analysis of articles from 2000 through 2012 was performed. For purposes of this review, only…
49 CFR Appendix D to Part 172 - Rail Risk Analysis Factors
Code of Federal Regulations, 2012 CFR
2012-10-01
... nature of the rail system, each carrier must select and document the analysis method/model used and identify the routes to be analyzed. D. The safety and security risk analysis must consider current data and... curvature; 7. Presence or absence of signals and train control systems along the route (“dark” versus...
Matsuyama, T; Fukuda, Y; Sakai, T; Tanimoto, N; Nakanishi, M; Nakamura, Y; Takano, T; Nakayasu, C
2017-08-01
Bacterial haemolytic jaundice caused by Ichthyobacterium seriolicida has been responsible for mortality in farmed yellowtail, Seriola quinqueradiata, in western Japan since the 1980s. In this study, polymorphic analysis of I. seriolicida was performed using three molecular methods: amplified fragment length polymorphism (AFLP) analysis, multilocus sequence typing (MLST) and multiple-locus variable-number tandem repeat analysis (MLVA). Twenty-eight isolates were analysed using AFLP, while 31 isolates were examined by MLST and MLVA. No polymorphisms were identified by AFLP analysis using EcoRI and MseI, or by MLST of internal fragments of eight housekeeping genes. However, MLVA revealed variation in repeat numbers of three elements, allowing separation of the isolates into 16 sequence types. Unweighted pair group method with arithmetic mean (UPGMA) cluster analysis of the MLVA data identified four major clusters, and all isolates belonged to clonal complexes. It is likely that I. seriolicida populations share a common ancestor, which may be a recently introduced strain. © 2016 John Wiley & Sons Ltd.
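UPGMA on MLVA profiles can be sketched with SciPy's average-linkage clustering over distances between tandem-repeat copy-number vectors; the five profiles below are invented, not the study's 31 isolates.

```python
# UPGMA (average-linkage) clustering of MLVA tandem-repeat profiles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# each row: repeat copy numbers at three variable-number loci (invented)
profiles = np.array([[5, 3, 7], [5, 3, 8], [9, 2, 4], [9, 2, 4], [6, 4, 7]])
Z = linkage(pdist(profiles, metric="cityblock"), method="average")  # UPGMA
print(fcluster(Z, t=3, criterion="distance"))  # cluster label per isolate
```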
Healing X-ray scattering images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jiliang; Lhermitte, Julien; Tian, Ye
X-ray scattering images contain numerous gaps and defects arising from detector limitations and experimental configuration. Here, we present a method to heal X-ray scattering images, filling gaps in the data and removing defects in a physically meaningful manner. Unlike generic inpainting methods, this method is closely tuned to the expected structure of reciprocal-space data. In particular, we exploit statistical tests and symmetry analysis to identify the structure of an image; we then copy, average and interpolate measured data into gaps in a way that respects the identified structure and symmetry. Importantly, the underlying analysis methods provide useful characterization of structures present in the image, including the identification of diffuse versus sharp features, anisotropy and symmetry. The presented method leverages known characteristics of reciprocal space, enabling physically reasonable reconstruction even with large image gaps. The method will correspondingly fail for images that violate these underlying assumptions. The method assumes point symmetry and is thus applicable to small-angle X-ray scattering (SAXS) data, but only to a subset of wide-angle data. Our method succeeds in filling gaps and healing defects in experimental images, including extending data beyond the original detector borders.
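The point-symmetry assumption can be illustrated with a toy healing step: fill masked detector pixels from their point-symmetric (Friedel) partners mirrored through the image center. This omits the paper's statistical tests, averaging and interpolation; the arrays and the gap are placeholders.

```python
# Toy symmetry-based gap filling for a point-symmetric scattering pattern.
import numpy as np

img = np.random.rand(256, 256)                # measured scattering image (placeholder)
mask = np.zeros_like(img, dtype=bool)
mask[100:120, :] = True                       # a detector gap (placeholder)

flipped = img[::-1, ::-1]                     # point symmetry about the image center
flipped_mask = mask[::-1, ::-1]
healable = mask & ~flipped_mask               # gap pixels whose partner was measured

healed = img.copy()
healed[healable] = flipped[healable]
print("gap pixels healed:", healable.sum(), "of", mask.sum())
```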
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, but infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that varied by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
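As one concrete instance of the global methods surveyed, the sketch below runs a Sobol' analysis with the SALib package on a stand-in three-parameter model; the parameter names, bounds and output formula are hypothetical, not the cholera or schistosomiasis models from the paper.

```python
# Sobol' global sensitivity analysis on a stand-in transmission-model output.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {"num_vars": 3,
           "names": ["beta", "gamma", "mu"],   # transmission, recovery, mortality
           "bounds": [[0.1, 1.0], [0.05, 0.5], [0.001, 0.05]]}

X = saltelli.sample(problem, 1024)             # Saltelli sampling scheme
# stand-in for the epidemic model output, e.g. final epidemic size
Y = X[:, 0] / (X[:, 1] + X[:, 2]) + 0.1 * X[:, 0] * X[:, 1]

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.2f}, total-order {st:.2f}")
```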
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
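A compact multi-scale entropy sketch: coarse-grain the signal at several scales and compute sample entropy at each. This is a plain O(N^2) SampEn in NumPy, not the authors' implementation, and the "phase current" is random placeholder data.

```python
# Multi-scale entropy: SampEn of coarse-grained versions of the signal.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templ) - 1):        # pairs i < j; self-matches excluded
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= tol))
        return count
    b, a = matches(m), matches(m + 1)
    return np.inf if a == 0 else -np.log(a / b)

signal = np.random.randn(2000)                  # placeholder phase-current trace
for scale in (1, 2, 4, 8):                      # coarse-graining scales
    n = len(signal) // scale
    coarse = signal[:n * scale].reshape(n, scale).mean(axis=1)
    print(f"scale {scale}: SampEn = {sample_entropy(coarse):.3f}")
```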
Monsen, T; Ryden, P
2017-09-01
Urinary tract infections (UTIs) are among the most common bacterial infections in humans, and urine culture is the gold standard for diagnosis. Considering the high prevalence of culture-negative specimens, any method that identifies such specimens is of interest. The aim was to evaluate a new screening concept for flow cytometry analysis (FCA). The outcomes were evaluated against urine culture, uropathogen species and three conventional screening methods. A prospective, consecutive study examined 1,312 urine specimens collected during January and February 2012. The specimens were analyzed using the Sysmex UF1000i FCA. Based on the FCA data, culture-negative specimens were identified in a new model using linear discriminant analysis (FCA-LDA). In total, 1,312 patients were included. In- and outpatients represented 19.6% and 79.4%, respectively; 68.3% of the specimens originated from women. Of the 610 culture-positive specimens, Escherichia coli represented 64%, enterococci 8% and Klebsiella spp. 7%. Screening with FCA-LDA at 95% sensitivity identified 42% (552/1,312) of specimens as culture negative when UTI was defined according to European guidelines. In conclusion, the proposed FCA-LDA screening method was superior or similar to the three conventional screening methods. We recommend that the proposed screening method be used in the clinic to exclude culture-negative specimens and thereby reduce workload, costs and turnaround time. In addition, the FCA data may add information that enhances handling and supports diagnosis of patients with suspected UTI pending urine culture.
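The screening step can be sketched with scikit-learn's linear discriminant analysis: fit on FCA features, set the score threshold at the 5th percentile of culture-positive scores so that sensitivity stays at 95%, and report everything below the threshold as culture negative. The features and labels are random placeholders.

```python
# LDA-based screening with a threshold fixed at 95% sensitivity.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.normal(size=(1312, 6))                 # FCA parameters per specimen (placeholder)
y = rng.random(1312) < 0.46                    # True = culture positive (placeholder)
X[y] += 1.0                                    # make the positives separable

scores = LinearDiscriminantAnalysis().fit(X, y).decision_function(X)
threshold = np.quantile(scores[y], 0.05)       # keep 95% of positives above it
screened_out = scores < threshold              # reported as culture negative
sens = (scores[y] >= threshold).mean()
print(f"sensitivity {sens:.2%}; screened out {screened_out.mean():.1%} of specimens")
```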
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
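The modified FMEA prioritization reduces to ranking candidate errors by a risk priority number, the product of occurrence, severity and detection scores; the error list and scores below are illustrative, not the study's Veress-needle findings.

```python
# Rank candidate errors by risk priority number (RPN = occurrence x severity x detection).
errors = [
    # (description, occurrence 1-10, severity 1-10, detection 1-10; 10 = hard to detect)
    ("needle inserted at wrong angle", 4, 8, 6),
    ("insufflation started before placement check", 2, 9, 3),
    ("wrong abdominal-wall lift force", 5, 6, 5),
]

ranked = sorted(errors, key=lambda e: e[1] * e[2] * e[3], reverse=True)
for desc, o, s, d in ranked:
    print(f"RPN {o * s * d:3d}  {desc}")       # highest-RPN errors get intervention first
```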
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biegalski, Steven R.; Buchholz, Bruce A.
2011-08-24
The objective of this work is to identify isotopic ratios suitable for analysis via mass spectrometry that distinguish between commercial nuclear reactor fuel cycles, fuel cycles for weapons grade plutonium, and products from nuclear weapons explosions. Methods will also be determined to distinguish the above from medical and industrial radionuclide sources. Mass spectrometry systems will be identified that are suitable for field measurement of such isotopes in an expedient manner.
A qualitative method using 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by analysis with liquid chromatography (LC)/negative ion-electrospray mass spectrometry (MS) was developed for identifying polar aldehydes and ketones in ozonated drinking water. This method offe...
Mapping brain activity in gradient-echo functional MRI using principal component analysis
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Singh, Manbir; Don, Manuel
1997-05-01
The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noise. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus-correlated changes in the presence of other interfering signals.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.
Ammonia Analysis by Gas Chromatograph/Infrared Detector (GC/IRD)
NASA Technical Reports Server (NTRS)
Scott, Joseph P.; Whitfield, Steve W.
2003-01-01
Methods are being developed at Marshall Space Flight Center's Toxicity Lab on a GC/IRD system that will be used to detect ammonia at low part-per-million (ppm) levels. These methods will allow analysis of gas samples by syringe injection. The GC is equipped with a unique cryogenically cooled inlet system that will enable our lab to make large injections of a gas sample. Although the initial focus of the work will be analysis of ammonia, this instrument could identify other compounds at a molecular level. If proper methods can be developed, the IRD could work as a powerful addition to our offgassing capabilities.
NASA Technical Reports Server (NTRS)
Townsend, J.; Meyers, C.; Ortega, R.; Peck, J.; Rheinfurth, M.; Weinstock, B.
1993-01-01
Probabilistic structural analyses and design methods are steadily gaining acceptance within the aerospace industry. The safety factor approach to design has long been the industry standard, and it is believed by many to be overly conservative and thus, costly. A probabilistic approach to design may offer substantial cost savings. This report summarizes several probabilistic approaches: the probabilistic failure analysis (PFA) methodology developed by Jet Propulsion Laboratory, fast probability integration (FPI) methods, the NESSUS finite element code, and response surface methods. Example problems are provided to help identify the advantages and disadvantages of each method.
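A toy contrast between the safety-factor and probabilistic views, using plain Monte Carlo with invented stress and strength distributions (none of the report's PFA, FPI or NESSUS machinery):

```python
# Monte Carlo failure probability versus a deterministic safety factor.
import numpy as np

rng = np.random.default_rng(0)
strength = rng.normal(500, 40, 1_000_000)   # MPa, material strength (invented)
stress = rng.normal(300, 50, 1_000_000)     # MPa, applied stress (invented)

print("P(failure) =", np.mean(stress > strength))     # probabilistic view
print("deterministic safety factor =", 500 / 300)     # mean strength / mean stress
```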
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective: The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design: Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted in selected samples. Fluoride was added to ultra pure NaCl and non-fluoridated commercial salt samples, and Ca and Mg were added to fluoride samples, in order to assess fluoride recoveries using modifications to the methods. Results: Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%–100% for diffusion vs. 67%–90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ion strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and by other ions in the salt. Conclusion: Direct and diffusion techniques for analysis of fluoridated salt are suitable methods for fluoride analysis. The choice of method should depend on the purpose of the analysis. PMID:20088217
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2014-01-01
Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches: (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of the medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data, with possible future implications for healthcare education. It also opens a new direction in medical education informatics research. PMID:25469323
ERIC Educational Resources Information Center
DiStefano, Christine; Kamphaus, R. W.
2006-01-01
Two classification methods, latent class cluster analysis and cluster analysis, are used to identify groups of child behavioral adjustment underlying a sample of elementary school children aged 6 to 11 years. Behavioral rating information across 14 subscales was obtained from classroom teachers and used as input for analyses. Both the procedures…
NASA Astrophysics Data System (ADS)
Ahn, Jae-Jun; Akram, Kashif; Shahbaz, Hafiz Muhammad; Kwon, Joong-Ho
2014-12-01
Frozen fish fillets (walleye pollock and Japanese Spanish mackerel) were selected as samples for irradiation (0-10 kGy) detection trials using different hydrolysis methods. Photostimulated luminescence (PSL)-based screening analysis of gamma-irradiated frozen fillets showed low sensitivity due to the limited silicate mineral content on the samples. The same limitation was found in thermoluminescence (TL) analysis of mineral samples isolated by the density separation method. However, acid (HCl) and alkali (KOH) hydrolysis methods were effective in obtaining enough minerals to carry out TL analysis, which was reconfirmed through the normalization step by calculating the TL ratios (TL1/TL2). For improved electron spin resonance (ESR) analysis, alkali and enzyme (alcalase) hydrolysis methods were compared for separating minute bone fractions. The enzymatic method yielded clearer radiation-specific hydroxyapatite radical signals than the alkaline method. Different hydrolysis methods could extend the application of TL and ESR techniques in identifying the irradiation history of frozen fish fillets.
NASA Astrophysics Data System (ADS)
Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan
To analyze the effect of engine vibration on cab noise of construction machinery across multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. First, the intrinsic mode functions (IMFs) of the vibration and noise signals are obtained by the EEMD method, and the IMFs occupying the same frequency bands are selected. Second, the spectral correlation coefficients between the selected IMFs are calculated, yielding the main frequency bands in which engine vibration has a significant impact on cab noise. Third, the dominant frequencies are picked out and analyzed by spectral analysis. The results show that the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
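A rough sketch of the first two steps, assuming the third-party PyEMD package (installed as EMD-signal) for the EEMD decomposition; the signals here are synthetic stand-ins for measured vibration and cab noise.

```python
import numpy as np
from PyEMD import EEMD   # assumes the EMD-signal / PyEMD package

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Stand-ins for measured signals: engine vibration and cab noise.
vibration = np.sin(2 * np.pi * 35 * t) + 0.3 * np.random.randn(t.size)
noise = 0.8 * np.sin(2 * np.pi * 35 * t + 0.4) + 0.3 * np.random.randn(t.size)

eemd = EEMD(trials=50)
imfs_v = eemd.eemd(vibration)
imfs_n = eemd.eemd(noise)

def spectrum(x):
    """Magnitude spectrum used for the correlation step."""
    return np.abs(np.fft.rfft(x))

# Spectral correlation between IMF pairs occupying similar bands.
k = min(len(imfs_v), len(imfs_n))
for i in range(k):
    r = np.corrcoef(spectrum(imfs_v[i]), spectrum(imfs_n[i]))[0, 1]
    print(f"IMF {i + 1}: spectral correlation = {r:.2f}")
```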
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
Reddy, Sreekanth P; Britto, Ramona; Vinnakota, Katyayni; Aparna, Hebbar; Sreepathi, Hari Kishore; Thota, Balaram; Kumari, Arpana; Shilpa, B M; Vrinda, M; Umesh, Srikantha; Samuel, Cini; Shetty, Mitesh; Tandon, Ashwani; Pandey, Paritosh; Hegde, Sridevi; Hegde, A S; Balasubramaniam, Anandh; Chandramouli, B A; Santosh, Vani; Kondaiah, Paturu; Somasundaram, Kumaravel; Rao, M R Satyanarayana
2008-05-15
Current methods of classification of astrocytoma based on histopathologic methods are often subjective and less accurate. Although patients with glioblastoma have a grave prognosis, significant variability in patient outcome is observed. Therefore, the aim of this study was to identify glioblastoma diagnostic and prognostic markers through microarray analysis. We carried out transcriptome analysis of 25 diffusely infiltrating astrocytoma samples [WHO grade II (diffuse astrocytoma), grade III (anaplastic astrocytoma), and grade IV (glioblastoma, GBM)] using cDNA microarrays containing 18,981 genes. Several of the markers identified were also validated by real-time reverse transcription quantitative PCR and immunohistochemical analysis on an independent set of tumor samples (n = 100). Survival analysis was carried out for two markers on another independent set of retrospective cases (n = 51). We identified several differentially regulated grade-specific genes. Independent validation by real-time reverse transcription quantitative PCR analysis found growth arrest and DNA-damage-inducible alpha (GADD45alpha) and follistatin-like 1 (FSTL1) to be up-regulated in most GBMs (both primary and secondary), whereas superoxide dismutase 2 and adipocyte enhancer binding protein 1 were up-regulated in the majority of primary GBMs. Further, identification of the grade-specific expression of GADD45alpha and FSTL1 by immunohistochemical staining reinforced our findings. Analysis of retrospective GBM cases with known survival data revealed that cytoplasmic overexpression of GADD45alpha conferred better survival, while coexpression of FSTL1 with p53 was associated with poor survival. Our study reveals that GADD45alpha and FSTL1 are GBM-specific, whereas superoxide dismutase 2 and adipocyte enhancer binding protein 1 are primary GBM-specific diagnostic markers. Whereas GADD45alpha overexpression confers a favorable prognosis, FSTL1 overexpression is a hallmark of poor prognosis in GBM patients.
Liang, Xianrui; Zhao, Cui; Su, Weike
2015-11-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry method integrating multi-constituent determination and fingerprint analysis has been established for quality assessment and control of Scutellaria indica L. The optimized method is fast and efficient, allowing multi-constituent determination and fingerprint analysis in a single chromatographic run within 11 min. Thirty-six compounds were detected, and 23 of them were unequivocally identified or tentatively assigned. The established fingerprint method was applied to the analysis of ten S. indica samples from different geographic locations. The quality assessment was achieved using principal component analysis. The proposed method is useful and reliable for the characterization of multi-constituents in a complex chemical system and the overall quality assessment of S. indica. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
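The paper's analytic method is not reproduced here, but a first-order (delta-method) sketch shows how input correlations enter the propagated variance, Var(y) ≈ gᵀ Σ g, with g the model gradient at the input means; the model and numbers are illustrative.

```python
import numpy as np

# Illustrative model y = f(x1, x2); gradient evaluated at the mean.
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

mu = np.array([1.0, 2.0])
grad = np.array([2 * mu[0] + 3 * mu[1], 3 * mu[0]])  # analytic gradient at mu

sd = np.array([0.1, 0.2])
for rho in (0.0, 0.8):                      # independent vs correlated inputs
    cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1] ** 2]])
    var_y = grad @ cov @ grad               # first-order variance propagation
    print(f"rho={rho}: Var(y) ~ {var_y:.4f}")
```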
Suratanee, Apichat; Plaimas, Kitiporn
2017-01-01
The associations between proteins and diseases are crucial information for investigating pathological mechanisms. However, the number of known and reliable protein-disease associations is quite small. In this study, an analysis framework to infer associations between proteins and diseases was developed based on a large data set of a human protein-protein interaction network integrating an effective network search, namely, the reverse k-nearest neighbor (RkNN) search. The RkNN search was used to identify the impact of a protein on other proteins. Then, associations between proteins and diseases were inferred statistically. The method using the RkNN search yielded a much higher precision than a random selection, the standard nearest neighbor search, or applying the method to a random protein-protein interaction network. All protein-disease pair candidates were verified by a literature search. Supporting evidence for 596 pairs was identified. In addition, cluster analysis of these candidates revealed 10 promising groups of diseases to be further investigated experimentally. This method can be used to identify novel associations to better understand complex relationships between proteins and diseases.
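A minimal sketch of the reverse k-nearest-neighbor idea: the RkNN set of a query q contains every point that counts q among its own k nearest neighbors. Euclidean toy points stand in for network distances (e.g., shortest-path distances in a protein-protein interaction network).

```python
import numpy as np

def reverse_knn(dist, q, k):
    """Indices of points that count point q among their k nearest neighbors."""
    n = dist.shape[0]
    hits = []
    for p in range(n):
        if p == q:
            continue
        order = np.argsort(dist[p])                 # neighbors of p by distance
        knn = [j for j in order if j != p][:k]      # p's k nearest (excluding p)
        if q in knn:
            hits.append(p)
    return hits

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                           # toy "proteins"
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(reverse_knn(dist, q=0, k=5))                  # points influenced by 0
```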
Analysis of ligand-protein exchange by Clustering of Ligand Diffusion Coefficient Pairs (CoLD-CoP)
NASA Astrophysics Data System (ADS)
Snyder, David A.; Chantova, Mihaela; Chaudhry, Saadia
2015-06-01
NMR spectroscopy is a powerful tool for describing protein structures and protein activity in pharmaceutical and biochemical development. This study describes a method to identify weakly binding ligands in biological systems using hierarchical clustering of diffusion coefficients derived from multidimensional data obtained with a 400 MHz Bruker NMR. Comparison of DOSY spectra of ligands of a chemical library in the presence and absence of target proteins reveals changes in the translational diffusion rates of small molecules upon interaction with macromolecules. For weak binders such as compounds found in fragment libraries, changes in diffusion rates upon macromolecular binding are on the order of the precision of DOSY diffusion measurements, and identifying such subtle shifts in diffusion requires careful statistical analysis. The "CoLD-CoP" (Clustering of Ligand Diffusion Coefficient Pairs) method presented here uses SAHN clustering to identify protein binders in a chemical library or even a not fully characterized metabolite mixture. We show how DOSY NMR and the "CoLD-CoP" method complement each other in identifying the most suitable candidates for lysozyme and wheat germ acid phosphatase.
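A toy sketch of the clustering step, assuming each ligand is represented by its (log) diffusion coefficients with and without protein; SciPy's agglomerative (SAHN) routines do the clustering, and the values are fabricated for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row: (log D without protein, log D with protein) for one ligand.
# Values are made up for illustration; binders shift in the second column.
D = np.array([
    [-9.00, -9.02], [-9.10, -9.08], [-8.95, -8.97],   # non-binders
    [-9.05, -9.60], [-8.90, -9.45],                   # putative binders
])

Z = linkage(D, method="average")          # SAHN agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # ligands in the minority cluster are binder candidates
```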
Villa, C A; Finlayson, S; Limpus, C; Gaus, C
2015-04-15
Biomonitoring of blood is commonly used to identify and quantify occupational or environmental exposure to chemical contaminants. Increasingly, this technique has been applied to wildlife contaminant monitoring, including for green turtles, allowing for the non-lethal evaluation of chemical exposure in their nearshore environment. The sources, composition, bioavailability and toxicity of metals in the marine environment are, however, often unknown and influenced by numerous biotic and abiotic factors. These factors can vary considerably across time and space making the selection of the most informative elements for biomonitoring challenging. This study aimed to validate an ICP-MS multi-element screening method for green turtle blood in order to identify and facilitate prioritisation of target metals for subsequent fully quantitative analysis. Multi-element screening provided semiquantitative results for 70 elements, 28 of which were also determined through fully quantitative analysis. Of the 28 comparable elements, 23 of the semiquantitative results had an accuracy between 67% and 112% relative to the fully quantified values. In lieu of any available turtle certified reference materials (CRMs), we evaluated the use of human blood CRMs as a matrix surrogate for quality control, and compared two commonly used sample preparation methods for matrix related effects. The results demonstrate that human blood provides an appropriate matrix for use as a quality control material in the fully quantitative analysis of metals in turtle blood. An example for the application of this screening method is provided by comparing screening results from blood of green turtles foraging in an urban and rural region in Queensland, Australia. Potential targets for future metal biomonitoring in these regions were identified by this approach. Copyright © 2014 Elsevier B.V. All rights reserved.
Smith, P A; Son, P S; Callaghan, P M; Jederberg, W W; Kuhlmann, K; Still, K R
1996-07-17
Components of colophony (rosin) resin acids are sensitizers through dermal and pulmonary exposure to heated and unheated material. Significant work in the literature identifies specific resin acids and their oxidation products as sensitizers. Pulmonary exposure to colophony sensitizers has been estimated indirectly through formaldehyde exposure. To assess pulmonary sensitization from airborne resin acids, direct measurement is desired, as the degree to which aldehyde exposure correlates with that of resin acids during colophony heating is undefined. Any analytical method proposed should be applicable to a range of compounds and should also identify specific compounds present in a breathing zone sample. This work adapts OSHA Sampling and Analytical Method 58, which is designed to provide airborne concentration data for coal tar pitch volatile solids by air filtration through a glass fiber filter, solvent extraction of the filter, and gravimetric analysis of the non-volatile extract residue. In addition to data regarding total soluble material captured, a portion of the extract may be subjected to compound-specific analysis. Levels of soluble solids found in personal breathing-zone samples collected during electronics soldering at a Naval Aviation Depot ranged from below the "reliable quantitation limit" reported in the method to 7.98 mg/m3. Colophony-spiked filters analyzed in accordance with the (modified) method produced a limit of detection for total solvent-soluble colophony solids of 10 micrograms/filter. High performance liquid chromatography was used to identify abietic acid present in a breathing zone sample.
Quasispecies Analyses of the HIV-1 Near-full-length Genome With Illumina MiSeq
Ode, Hirotaka; Matsuda, Masakazu; Matsuoka, Kazuhiro; Hachiya, Atsuko; Hattori, Junko; Kito, Yumiko; Yokomaku, Yoshiyuki; Iwatani, Yasumasa; Sugiura, Wataru
2015-01-01
Human immunodeficiency virus type-1 (HIV-1) exhibits high between-host genetic diversity and within-host heterogeneity, recognized as quasispecies. Because HIV-1 quasispecies fluctuate in terms of multiple factors, such as antiretroviral exposure and host immunity, analyzing the HIV-1 genome is critical for selecting effective antiretroviral therapy and understanding within-host viral coevolution mechanisms. Here, to obtain HIV-1 genome sequence information that includes minority variants, we sought to develop a method for evaluating quasispecies throughout the HIV-1 near-full-length genome using the Illumina MiSeq benchtop deep sequencer. To ensure the reliability of minority mutation detection, we applied an analysis method of mapping sequence reads onto a consensus sequence derived from de novo assembly, followed by iterative mapping and subsequent unique error correction. Deep sequencing analyses of an HIV-1 clone showed that the analysis method reduced erroneous base prevalence below 1% at each sequence position and discarded only <1% of all collected nucleotides, maximizing the usage of the collected genome sequences. Further, we designed primer sets to amplify the HIV-1 near-full-length genome from clinical plasma samples. Deep sequencing of 92 samples in combination with the primer sets and our analysis method provided sufficient coverage to identify >1%-frequency sequences throughout the genome. When we evaluated sequences of pol genes from 18 treatment-naïve patients' samples, the deep sequencing results were in agreement with Sanger sequencing and identified numerous additional minority mutations. The results suggest that our deep sequencing method is suitable for identifying within-host viral population dynamics throughout the genome. PMID:26617593
NASA Astrophysics Data System (ADS)
Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann
2017-07-01
Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation, and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics led to various new and updated correlation functions, e.g. for ratios between mainshock and strongest aftershock, and general aftershock activity metrics.
Chen, Neng; Tranebjærg, Lisbeth; Rendtorff, Nanna Dahl; Schrijver, Iris
2011-01-01
Pendred syndrome and DFNB4 (autosomal recessive nonsyndromic congenital deafness, locus 4) are associated with autosomal recessive congenital sensorineural hearing loss and mutations in the SLC26A4 gene. Extensive allelic heterogeneity, however, necessitates analysis of all exons and splice sites to identify mutations for individual patients. Although Sanger sequencing is the gold standard for mutation detection, screening methods supplemented with targeted sequencing can provide a cost-effective alternative. One such method, denaturing high-performance liquid chromatography, was developed for clinical mutation detection in SLC26A4. However, this method inherently cannot distinguish homozygous changes from wild-type sequences. High-resolution melting (HRM), on the other hand, can detect heterozygous and homozygous changes cost-effectively, without any post-PCR modifications. We developed a closed-tube HRM mutation detection method specific for SLC26A4 that can be used in the clinical diagnostic setting. Twenty-eight primer pairs were designed to cover all 21 SLC26A4 exons and splice junction sequences. Using the resulting amplicons, initial HRM analysis detected all 45 variants previously identified by sequencing. Subsequently, a 384-well plate format was designed for up to three patient samples per run. Blinded HRM testing on these plates of patient samples collected over 1 year in a clinical diagnostic laboratory accurately detected all variants identified by sequencing. In conclusion, HRM with targeted sequencing is a reliable, simple, and cost-effective method for SLC26A4 mutation screening and detection. PMID:21704276
Rodriguez-Sabate, Clara; Morales, Ingrid; Sanchez, Alberto; Rodriguez, Manuel
2017-01-01
The complexity of basal ganglia (BG) interactions is often condensed into simple models, mainly based on animal data, that present the BG as closed-loop cortico-subcortical circuits of excitatory/inhibitory pathways which analyze the incoming cortical data and return the processed information to the cortex. This study aimed at identifying functional relationships in the BG motor loop of 24 healthy subjects who provided written informed consent and whose BOLD activity was recorded by MRI methods. The analysis of the functional interaction between these centers by correlation techniques and multiple linear regression showed non-linear relationships which cannot be suitably addressed with these methods. Multiple correspondence analysis (MCA), an unsupervised multivariable procedure which can identify non-linear interactions, was used to study the functional connectivity of the BG when subjects were at rest. Linear methods showed the functional interactions expected according to current BG models. MCA showed additional functional interactions which were not evident when using linear methods. Seven functional configurations of the BG were identified with MCA: two involving the primary motor and somatosensory cortex, one involving the deepest BG (external and internal globus pallidus, subthalamic nucleus and substantia nigra), one with the input-output BG centers (putamen and motor thalamus), two linking the input-output centers with other BG (external pallidum and subthalamic nucleus), and one linking the external pallidum and the substantia nigra. The results provide evidence that non-linear MCA and linear methods are complementary and are best used in conjunction to more fully understand the nature of the functional connectivity of brain centers.
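A compact sketch of MCA via correspondence analysis of an indicator matrix, implemented directly with NumPy's SVD; the categorical table is a made-up stand-in for discretized BOLD features, not the study's data.

```python
import numpy as np
import pandas as pd

# Toy categorical table: each row a subject, each column a discretized
# BOLD feature (low/high activity per BG center). Illustrative only.
df = pd.DataFrame({
    "putamen":  ["low", "high", "high", "low", "high"],
    "pallidum": ["low", "high", "low",  "low", "high"],
    "thalamus": ["high", "high", "low", "low", "high"],
})
Z = pd.get_dummies(df).to_numpy(float)    # indicator (disjunctive) matrix

# Correspondence analysis of the indicator matrix = MCA.
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = np.diag(r ** -0.5) @ U * s   # principal coordinates of subjects
print(row_coords[:, :2])                  # first two MCA dimensions
```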
Buchanan, R; Ball, D; Dolphin, H; Dave, J
2016-09-01
Matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) was compared with the API NH biochemical method for the identification of Neisseria gonorrhoeae in routine clinical samples. A retrospective review of laboratory records for 1090 isolates for which both biochemical and MALDI-TOF MS identifications were available was performed. Cases of discrepant results were examined in detail for evidence supportive of a particular organism identification. Of 1090 isolates, 1082 were identified as N. gonorrhoeae by API NH. MALDI-TOF MS successfully identified 984 (91%) of these after one analysis, rising to 1081 (99.9%) after two analyses, with a positive predictive value of 99.3%. For those isolates requiring a repeat analysis, failure to generate an identifiable proteomic signature was the reason in 76% of cases, with alternative initial identifications accounting for the remaining 24%. MALDI-TOF MS identified eight isolates as N. gonorrhoeae that were not identified as such by API NH; examination of these discrepant results suggested that the MALDI-TOF MS identification may be the more reliable. MALDI-TOF MS is at least as accurate and reliable a method of identifying N. gonorrhoeae as API NH. We propose that MALDI-TOF MS could potentially be used as a single method for N. gonorrhoeae identification in routine cases by laboratories with access to this technology. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Slope Stability Analysis of Waste Dump in Sandstone Open Pit Osielec
NASA Astrophysics Data System (ADS)
Adamczyk, Justyna; Cała, Marek; Flisiak, Jerzy; Kolano, Malwina; Kowalski, Michał
2013-03-01
This paper presents the slope stability analysis for the current as well as the projected (final) geometry of the waste dump at the Sandstone Open Pit "Osielec". Six cross-sections were selected for the stability analysis. The final geometry of the waste dump was then designed and its stability analyzed. On the basis of the analysis results, opportunities to improve the stability of the structure were identified. A further issue addressed in the paper is the proportion of the mixture of mining and processing wastes for which the waste dump remains stable. Stability calculations were carried out using the Janbu method, which belongs to the limit equilibrium methods.
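A minimal sketch of the simplified Janbu limit-equilibrium iteration for the factor of safety, assuming per-slice geometry and strengths are known; the empirical correction factor f0 is omitted and the slice values are illustrative.

```python
import numpy as np

def janbu_fs(W, alpha, b, c, phi, u, tol=1e-6):
    """Simplified Janbu factor of safety by fixed-point iteration.

    W: slice weights, alpha: base inclinations (rad), b: slice widths,
    c: cohesion, phi: friction angle (rad), u: pore pressure at the base.
    The empirical correction factor f0 is omitted for brevity.
    """
    fs = 1.0
    for _ in range(100):
        n_alpha = np.cos(alpha) ** 2 * (1 + np.tan(alpha) * np.tan(phi) / fs)
        resisting = np.sum((c * b + (W - u * b) * np.tan(phi)) / n_alpha)
        driving = np.sum(W * np.tan(alpha))
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            break
        fs = fs_new
    return fs

# Five illustrative slices (units: kN, m, rad, kPa).
W = np.array([120.0, 260.0, 330.0, 280.0, 140.0])
alpha = np.radians([10, 18, 26, 34, 42])
fs = janbu_fs(W, alpha, b=np.full(5, 2.0), c=15.0, phi=np.radians(28), u=0.0)
print(f"FS ~ {fs:.2f}")
```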
Brown Connolly, Nancy E
2014-12-01
This foundational study applies receiver operating characteristic (ROC) analysis to evaluate the utility and predictive value of a disease management (DM) model that uses remote monitoring (RM) devices for chronic obstructive pulmonary disease (COPD). The literature identifies a need for a more rigorous method to validate and quantify evidence-based value for RM systems used to monitor persons with a chronic disease. ROC analysis is an engineering approach widely applied in medical testing that has not been evaluated for its utility in RM. Classifiers (peripheral oxygen saturation [SpO2], blood pressure [BP], and pulse), optimum threshold, and predictive accuracy are evaluated based on patient outcomes. Parametric and nonparametric methods were used. Event-based patient outcomes included inpatient hospitalization, accident and emergency, and home health visits. Statistical analysis tools included Microsoft (Redmond, WA) Excel® and MedCalc® (MedCalc Software, Ostend, Belgium) version 12 to generate ROC curves and statistics. Persons with COPD were monitored for a minimum of 183 days, with at least one inpatient hospitalization within the 12 months prior to monitoring. Retrospective, de-identified patient data from a United Kingdom National Health Service COPD program were used. Datasets included biometric readings, alerts, and resource utilization. SpO2 was identified as a predictive classifier, with an optimal average threshold setting of 85-86%. BP and pulse were failed classifiers, and areas of design were identified that may improve utility and predictive capacity. A cost avoidance methodology was developed. Results can be applied to health services planning decisions, and the methods can be applied to system design and evaluation based on patient outcomes. This study validated the use of ROC analysis in RM program evaluation.
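A short sketch of the core ROC workflow on synthetic SpO2 readings, using scikit-learn; the threshold is chosen by Youden's J statistic, one common criterion (the study's exact procedure may differ).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Stand-in data: SpO2 readings for monitored patients, with 1 marking
# patients who later had an event (hospitalisation, A&E, home visit).
y = np.concatenate([np.zeros(80, int), np.ones(20, int)])
spo2 = np.concatenate([rng.normal(93, 3, 80), rng.normal(86, 3, 20)])

# Lower SpO2 should predict events, so the score is -spo2.
fpr, tpr, thresholds = roc_curve(y, -spo2)
auc = roc_auc_score(y, -spo2)

j = tpr - fpr                       # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {auc:.2f}, optimal SpO2 threshold ~ {-thresholds[best]:.0f}%")
```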
Maury, Augusto; Revilla, Reynier I
2015-08-01
Cosmic rays (CRs) occasionally strike charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth into the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CR features as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify CR spikes. The identified data points are then replaced by interpolated values using a weighted-average interpolation technique. This approach only modifies the data in the close vicinity of the CRs. Additionally, robust wavelet transform parameters, a desirable property for automation, are proposed after optimizing them by applying the method to a large number of spectra. However, this algorithm, like all single-spectrum analysis procedures, is limited to cases in which the CRs have a much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.
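A simplified sketch of wavelet-based despiking with PyWavelets: level-1 detail coefficients flag narrow outliers, which are then replaced by interpolation. The autocorrelation step and the weighted-average interpolation of the published algorithm are replaced here by a robust threshold and plain linear interpolation.

```python
import numpy as np
import pywt   # PyWavelets

def despike(spectrum, wavelet="haar", k=8.0):
    """Flag narrow spikes via level-1 detail coefficients, then interpolate."""
    cA, cD = pywt.dwt(spectrum, wavelet)
    mad = np.median(np.abs(cD - np.median(cD)))
    spikes = np.abs(cD) > k * mad / 0.6745      # robust outlier threshold
    # Each level-1 coefficient spans ~2 samples of the original signal.
    mask = np.zeros(spectrum.size, dtype=bool)
    for i in np.where(spikes)[0]:
        mask[2 * i:2 * i + 2] = True
    x = np.arange(spectrum.size)
    clean = spectrum.copy()
    clean[mask] = np.interp(x[mask], x[~mask], spectrum[~mask])
    return clean

raman = np.sin(np.linspace(0, 6, 1024)) + 0.05 * np.random.randn(1024)
raman[400] += 25.0                              # synthetic cosmic-ray spike
cleaned = despike(raman)
```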
Masuyama, Kotoka; Shojo, Hideki; Nakanishi, Hiroaki; Inokuchi, Shota; Adachi, Noboru
2017-01-01
Sex determination is important in archeology and anthropology for the study of past societies, cultures, and human activities. Sex determination is also one of the most important components of individual identification in criminal investigations. We developed a new method of sex determination by detecting a single-nucleotide polymorphism in the amelogenin gene using amplified product-length polymorphisms in combination with sex-determining region Y analysis. We particularly focused on the most common types of postmortem DNA damage in ancient and forensic samples: fragmentation and nucleotide modification resulting from deamination. Amplicon size was designed to be less than 60 bp to make the method more useful for analyzing degraded DNA samples. All DNA samples collected from eight Japanese individuals (four male, four female) were evaluated correctly using our method. The detection limit for accurate sex determination was determined to be 20 pg of DNA. We compared our new method with commercial short tandem repeat analysis kits using DNA samples artificially fragmented by ultraviolet irradiation. Our novel method was the most robust for highly fragmented DNA samples. To deal with allelic dropout resulting from deamination, we adopted “bidirectional analysis,” which analyzed samples from both sense and antisense strands. This new method was applied to 14 Jomon individuals (3500-year-old bone samples) whose sex had been identified morphologically. We could correctly identify the sex of 11 out of 14 individuals. These results show that our method is reliable for the sex determination of highly degenerated samples. PMID:28052096
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
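A sketch of the efficient model-discrimination step, computing AICc and BIC from least-squares fits; the residual sums of squares and parameter counts are hypothetical stand-ins for alternative hydraulic-conductivity models.

```python
import numpy as np

def information_criteria(rss, n, k):
    """AICc and BIC for a least-squares model with k parameters and n obs."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aicc, bic

# Hypothetical alternative models of hydraulic conductivity:
# (residual sum of squares, number of parameters), n = 60 observations.
models = {"uniform K": (42.0, 2), "two zones": (30.5, 4), "five zones": (28.9, 8)}
for name, (rss, k) in models.items():
    aicc, bic = information_criteria(rss, n=60, k=k)
    print(f"{name}: AICc={aicc:.1f}, BIC={bic:.1f}")   # smaller is better
```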
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges, supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
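A rough sketch of the kernel-interpolation idea using SciPy's Gaussian KDE over species-distribution centroids; GIE additionally weights each centroid by its species' area of influence, which this sketch omits, and the coordinates are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Stand-in centroids of species distributions (lon, lat); two loose groups.
centroids = np.vstack([rng.normal([-45, -20], 1.5, (30, 2)),
                       rng.normal([-60, -3], 1.5, (25, 2))])

kde = gaussian_kde(centroids.T)            # kernel interpolation of centroids
lon, lat = np.mgrid[-65:-40:200j, -25:2:200j]
density = kde(np.vstack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
# High-density regions of overlapping centroids suggest candidate areas
# of endemism, independent of any grid-cell scheme.
```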
Claus, Rainer; Lucas, David M.; Stilgenbauer, Stephan; Ruppert, Amy S.; Yu, Lianbo; Zucknick, Manuela; Mertens, Daniel; Bühler, Andreas; Oakes, Christopher C.; Larson, Richard A.; Kay, Neil E.; Jelinek, Diane F.; Kipps, Thomas J.; Rassenti, Laura Z.; Gribben, John G.; Döhner, Hartmut; Heerema, Nyla A.; Marcucci, Guido; Plass, Christoph; Byrd, John C.
2012-01-01
Purpose Increased ZAP-70 expression predicts poor prognosis in chronic lymphocytic leukemia (CLL). Current methods for accurately measuring ZAP-70 expression are problematic, preventing widespread application of these tests in clinical decision making. We therefore used comprehensive DNA methylation profiling of the ZAP-70 regulatory region to identify sites important for transcriptional control. Patients and Methods High-resolution quantitative DNA methylation analysis of the entire ZAP-70 gene regulatory regions was conducted on 247 samples from patients with CLL from four independent clinical studies. Results Through this comprehensive analysis, we identified a small area in the 5′ regulatory region of ZAP-70 that showed large variability in methylation in CLL samples but was universally methylated in normal B cells. High correlation with mRNA and protein expression, as well as activity in promoter reporter assays, revealed that within this differentially methylated region, a single CpG dinucleotide and neighboring nucleotides are particularly important in ZAP-70 transcriptional regulation. Furthermore, by using clustering approaches, we identified a prognostic role for this site in four independent data sets of patients with CLL using time to treatment, progression-free survival, and overall survival as clinical end points. Conclusion Comprehensive quantitative DNA methylation analysis of the ZAP-70 gene in CLL identified important regions responsible for transcriptional regulation. In addition, loss of methylation at a specific single CpG dinucleotide in the ZAP-70 5′ regulatory sequence is a highly predictive and reproducible biomarker of poor prognosis in this disease. This work demonstrates the feasibility of using quantitative specific ZAP-70 methylation analysis as a relevant clinically applicable prognostic test in CLL. PMID:22564988
Brasier, Allan R; Victor, Sundar; Boetticher, Gary; Ju, Hyunsu; Lee, Chang; Bleecker, Eugene R; Castro, Mario; Busse, William W; Calhoun, William J
2008-01-01
Asthma is a heterogeneous clinical disorder. Methods for objective identification of disease subtypes will focus clinical interventions and help identify causative pathways. Few studies have explored phenotypes at a molecular level. We sought to discriminate asthma phenotypes on the basis of cytokine profiles in bronchoalveolar lavage (BAL) samples from patients with mild-moderate and severe asthma. Twenty-five cytokines were measured in BAL samples of 84 patients (41 severe, 43 mild-moderate) using bead-based multiplex immunoassays. The normalized data were subjected to statistical and informatics analysis. Four groups of asthmatic profiles could be identified on the basis of unsupervised analysis (hierarchical clustering) that were independent of treatment. One group, enriched in patients with severe asthma, showed differences in BAL cellular content, reductions in baseline pulmonary function, and an enhanced response to methacholine provocation. Ten cytokines were identified that accurately predicted this group. Classification methods for predicting methacholine sensitivity were developed. The best model predicted hyperresponders with 88% accuracy across 10 trials of 10-fold cross-validation. The cytokines that contributed to this model were IL-2, IL-4, and IL-5. On the basis of this classifier, 3 distinct hyperresponder classes were identified that varied in BAL eosinophil count and PC20 methacholine. Cytokine expression patterns in BAL can be used to identify distinct types of asthma and distinct subsets of methacholine hyperresponders. Further biomarker discovery in BAL may be informative.
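A sketch of the two analysis stages, unsupervised hierarchical clustering followed by cross-validated classification, on synthetic cytokine profiles; the random forest is my stand-in, since the abstract does not name the classifier.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Synthetic normalized cytokine panels: 43 mild-moderate, 41 severe
# patients, 25 cytokines each; the severe group is shifted slightly.
X = np.vstack([rng.normal(0.0, 1.0, (43, 25)),
               rng.normal(0.8, 1.0, (41, 25))])
y = np.r_[np.zeros(43, int), np.ones(41, int)]

# Unsupervised step: hierarchical clustering into four profile groups.
groups = fcluster(linkage(X, method="ward"), t=4, criterion="maxclust")

# Supervised step: 10-fold cross-validated prediction of severity.
scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=10)
print(f"mean CV accuracy = {scores.mean():.2f}")
```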
Using qualitative comparative analysis in a systematic review of a complex intervention.
Kahwati, Leila; Jacobs, Sara; Kane, Heather; Lewis, Megan; Viswanathan, Meera; Golin, Carol E
2016-05-04
Systematic reviews evaluating complex interventions often encounter substantial clinical heterogeneity in intervention components and implementation features making synthesis challenging. Qualitative comparative analysis (QCA) is a non-probabilistic method that uses mathematical set theory to study complex phenomena; it has been proposed as a potential method to complement traditional evidence synthesis in reviews of complex interventions to identify key intervention components or implementation features that might explain effectiveness or ineffectiveness. The objective of this study was to describe our approach in detail and examine the suitability of using QCA within the context of a systematic review. We used data from a completed systematic review of behavioral interventions to improve medication adherence to conduct two substantive analyses using QCA. The first analysis sought to identify combinations of nine behavior change techniques/components (BCTs) found among effective interventions, and the second analysis sought to identify combinations of five implementation features (e.g., agent, target, mode, time span, exposure) found among effective interventions. For each substantive analysis, we reframed the review's research questions to be designed for use with QCA, calibrated sets (i.e., transformed raw data into data used in analysis), and identified the necessary and/or sufficient combinations of BCTs and implementation features found in effective interventions. Our application of QCA for each substantive analysis is described in detail. We extended the original review findings by identifying seven combinations of BCTs and four combinations of implementation features that were sufficient for improving adherence. We found reasonable alignment between several systematic review steps and processes used in QCA except that typical approaches to study abstraction for some intervention components and features did not support a robust calibration for QCA. QCA was suitable for use within a systematic review of medication adherence interventions and offered insights beyond the single dimension stratifications used in the original completed review. Future prospective use of QCA during a review is needed to determine the optimal way to efficiently integrate QCA into existing approaches to evidence synthesis of complex interventions.
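A minimal sketch of the fuzzy-set QCA sufficiency test, using Ragin's standard consistency and coverage formulas; the membership scores are invented for illustration.

```python
import numpy as np

def consistency_coverage(x, y):
    """Fuzzy-set sufficiency consistency and coverage of condition x for y."""
    overlap = np.minimum(x, y).sum()
    return overlap / x.sum(), overlap / y.sum()

# Fuzzy memberships for 8 hypothetical interventions:
# x = membership in a BCT combination, y = membership in "effective".
x = np.array([0.9, 0.8, 0.7, 0.9, 0.2, 0.1, 0.3, 0.6])
y = np.array([1.0, 0.9, 0.6, 0.8, 0.3, 0.2, 0.1, 0.7])

cons, cov = consistency_coverage(x, y)
print(f"consistency={cons:.2f}, coverage={cov:.2f}")  # e.g., >0.8 ~ sufficient
```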
Validation of MIMGO: a method to identify differentially expressed GO terms in a microarray dataset
2012-01-01
Background We previously proposed an algorithm for the identification of GO terms that commonly annotate genes whose expression is upregulated or downregulated in some microarray data compared with in other microarray data. We call these “differentially expressed GO terms” and have named the algorithm “matrix-assisted identification method of differentially expressed GO terms” (MIMGO). MIMGO can also identify microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. However, MIMGO has not yet been validated on a real microarray dataset using all available GO terms. Findings We combined Gene Set Enrichment Analysis (GSEA) with MIMGO to identify differentially expressed GO terms in a yeast cell cycle microarray dataset. GSEA followed by MIMGO (GSEA + MIMGO) correctly identified (p < 0.05) microarray data in which genes annotated to differentially expressed GO terms are upregulated. We found that GSEA + MIMGO was slightly less effective than, or comparable to, GSEA (Pearson), a method that uses Pearson’s correlation as a metric, at detecting true differentially expressed GO terms. However, unlike other methods including GSEA (Pearson), GSEA + MIMGO can comprehensively identify the microarray data in which genes annotated with a differentially expressed GO term are upregulated or downregulated. Conclusions MIMGO is a reliable method to identify differentially expressed GO terms comprehensively. PMID:23232071
Mohkam, Milad; Nezafat, Navid; Berenjian, Aydin; Mobasher, Mohammad Ali; Ghasemi, Younes
2016-03-01
Some Bacillus species, especially those of the Bacillus subtilis and Bacillus pumilus groups, have highly similar 16S rRNA gene sequences and are hard to identify based on 16S rDNA sequence analysis. To overcome this drawback, rpoB and recA sequence analysis along with randomly amplified polymorphic DNA (RAPD) fingerprinting was examined as an alternative method for differentiating Bacillus species. The 16S rRNA, rpoB and recA genes were amplified via polymerase chain reaction using their specific primers. The resulting PCR amplicons were sequenced, and phylogenetic analysis was performed with the MEGA 6 software. Identification based on 16S rRNA gene sequencing was underpinned by rpoB and recA gene sequencing as well as the RAPD-PCR technique. Concatenation and phylogenetic analysis showed that the extent of diversity and similarity was better resolved by the rpoB and recA primers, a result also reinforced by the RAPD-PCR method. In one case, however, these approaches failed to identify an isolate, an issue offset by combining them with phenotypic methods. Overall, RAPD fingerprinting and rpoB and recA sequence analysis, along with concatenated gene sequence analysis, discriminated closely related Bacillus species, which highlights the significance of the multigenic method in more precisely distinguishing Bacillus strains. This research emphasizes the benefit of RAPD fingerprinting and rpoB and recA sequence analysis over 16S rRNA gene sequence analysis for suitable and effective identification of Bacillus species, as recommended for probiotic products.
Horsch, Salome; Kopczynski, Dominik; Kuthe, Elias; Baumbach, Jörg Ingo; Rahmann, Sven
2017-01-01
Motivation: Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention by human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. Method: We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. Results: The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable unbiased, high-throughput use of the technology. PMID:28910313
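A skeleton of the winning pipeline shape, EM clustering of peak descriptors followed by random-forest classification, using scikit-learn; the data are random placeholders, so the AUC hovers near 0.5 by construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
# Stand-in peak descriptors (position, width, intensity) pooled over runs.
peaks = rng.random((500, 3))

# Clustering step (EM): group peaks across measurements into compounds.
gmm = GaussianMixture(n_components=20, random_state=0).fit(peaks)
compound_id = gmm.predict(peaks)

# Build a per-measurement intensity matrix, then classify disease status.
X = rng.random((40, 20))                   # 40 measurements x 20 compounds
y = rng.integers(0, 2, 40)                 # stand-in labels
auc = cross_val_score(RandomForestClassifier(), X, y,
                      scoring="roc_auc", cv=5)
print(f"mean AUC = {auc.mean():.2f}")      # ~0.5 here: labels are random
```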
NASA Astrophysics Data System (ADS)
Thompson, N. A.; Ruck, H. W.
1984-04-01
The Air Force is interested in identifying potentially hazardous tasks and prevention of accidents. This effort proposes four methods for determining safety training priorities for job tasks in three enlisted specialties. These methods can be used to design training aimed at avoiding loss of people, time, materials, and money associated with on-the-job accidents. Job tasks performed by airmen were measured using task and job factor ratings. Combining accident reports and job inventories, subject-matter experts identified tasks associated with accidents over a 3-year period. Applying correlational, multiple regression, and cost-benefit analysis, four methods were developed for ordering hazardous tasks to determine safety training priorities.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation, SaTScan, are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is optimal for identifying clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support the selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all counties to identify stable, homogeneous clusters. We evaluated our analysis results by comparing them to those produced by other independent techniques, including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.
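A sketch of the reliability idea: run the scan many times with varied scaling parameters and score each county by how often it falls in a significant cluster; the run results here are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
# results[r, c] = True if county c fell inside a statistically significant
# cluster in run r. Simulated stand-in for 50 SaTScan runs over 3109 counties.
results = rng.random((50, 3109)) < 0.05
results[:, :40] = rng.random((50, 40)) < 0.9   # 40 counties cluster robustly

reliability = results.mean(axis=0)             # per-county fraction of runs
stable = np.where(reliability > 0.8)[0]        # stable across scaling choices
print(f"{stable.size} counties lie in stable, scale-robust clusters")
```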
Rosen, M. A.; Sampson, J. B.; Jackson, E. V.; Koka, R.; Chima, A. M.; Ogbuagu, O. U.; Marx, M. K.; Koroma, M.; Lee, B. H.
2014-01-01
Background Anaesthesia care in developed countries involves sophisticated technology and experienced providers. However, advanced machines may be inoperable or fail frequently when placed into the austere medical environment of a developing country. Failure mode and effects analysis (FMEA) is a method for engaging local staff in identifying real or potential breakdowns in processes or work systems and developing strategies to mitigate risks. Methods Nurse anaesthetists from the two tertiary care hospitals in Freetown, Sierra Leone, participated in three sessions moderated by a human factors specialist and an anaesthesiologist. Sessions were audio recorded, and the group discussion was graphically mapped by the session facilitator for analysis and commentary. These sessions sought to identify potential barriers to implementing an anaesthesia machine designed for austere medical environments—the universal anaesthesia machine (UAM)—and to engage local nurse anaesthetists in identifying potential solutions to these barriers. Results Participating Sierra Leonean clinicians identified five main categories of failure modes (resource availability, environmental issues, staff knowledge and attitudes, and workload and staffing issues) and four categories of mitigation strategies (resource management plans, engaging and educating stakeholders, peer support for new machine use, and collectively advocating for needed resources). Conclusions We identified factors that may limit the impact of a UAM and devised likely effective strategies for mitigating those risks. PMID:24833727
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihoods of occurrence, detection, and correction of the human error are identified, as is the severity of the error's effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to determine whether corrective measures are appropriate.
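The verb-driven flow lends itself to a small sketch. The error table, probability scales and threshold below are hypothetical placeholders for illustration, not the patented tool's actual data or interface:

    # Toy sketch of a verb-keyed HF PFMEA pass: look up candidate human errors
    # for a task verb, score risk, and compare against a threshold.
    # All table entries, scales and the threshold are invented.
    ERROR_TABLE = {
        "read":    ["misread value", "skipped reading"],
        "connect": ["wrong port", "loose connection"],
    }

    def assess(verb, p_occurrence, p_detect_and_correct, severity, threshold=8.0):
        """Return (error, risk, needs_corrective_measures) for each candidate error."""
        results = []
        for error in ERROR_TABLE.get(verb, []):
            # Risk grows with occurrence likelihood and severity, and shrinks
            # with the chance the error is detected and corrected in time.
            risk = p_occurrence * (1.0 - p_detect_and_correct) * severity
            results.append((error, risk, risk > threshold))
        return results

    print(assess("connect", p_occurrence=0.3, p_detect_and_correct=0.5, severity=90))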
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Kelly Porter
Key goals towards national biosecurity include methods for analyzing pathogens, predicting their emergence, and developing countermeasures. These goals are served by studying bacterial genes that promote pathogenicity and the pathogenicity islands that mobilize them. Cyberinfrastructure promoting an island database advances this field and enables deeper bioinformatic analysis that may identify novel pathogenicity genes. New automated methods and rich visualizations were developed for identifying pathogenicity islands, based on the principle that islands occur sporadically among closely related strains. The chromosomally-ordered pan-genome organizes all genes from a clade of strains; gaps in this visualization indicate islands, and decorations of the gene matrix facilitate exploration of island gene functions. A "learned phyloblocks" method was developed for automated island identification that trains on the phylogenetic patterns of islands identified by other methods. Learned phyloblocks better defined termini of previously identified islands in multidrug-resistant Klebsiella pneumoniae ATCC BAA-2146, and found its only antibiotic resistance island.
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without the need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
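One common way to recover signatures and proportions blindly is non-negative matrix factorization; the sketch below uses NMF on synthetic data as a stand-in (the authors' actual algorithm differs in detail):

    # Blind separation of a heterogeneous expression matrix into cell-type
    # signatures and per-sample proportions via NMF. Data are synthetic and
    # the number of cell types k is assumed known here for simplicity.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    X = rng.gamma(2.0, 1.0, size=(500, 40))      # genes x samples, synthetic

    k = 3                                        # assumed number of cell types
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    S = model.fit_transform(X)                   # genes x k: signatures
    P = model.components_                        # k x samples: raw mixing weights

    P = P / P.sum(axis=0, keepdims=True)         # normalise columns to proportions
    print(S.shape, P.shape, P[:, 0])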
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
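A toy sketch of the binary identifiability-matrix idea, with made-up polynomial equations and a drastically simplified decision rule (the paper's matrix reduction operations are considerably more elaborate):

    # Turn symbolic identifiability equations into a binary matrix whose rows
    # are equations and whose columns are parameters; a row touching a single
    # parameter pins that parameter down. Equations here are invented toys.
    import sympy as sp

    a, b, c = sp.symbols("a b c")
    equations = [a * b - 2, b + c - 1, c - 3]   # toy identifiability equations

    params = [a, b, c]
    M = [[int(p in eq.free_symbols) for p in params] for eq in equations]

    # Greatly simplified reduction step: report parameters fixed by a
    # single-entry row (real reductions substitute them out and iterate).
    for i, row in enumerate(M):
        if sum(row) == 1:
            print("equation", i, "identifies", params[row.index(1)])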
Assessment of stem cell differentiation based on genome-wide expression profiles.
Godoy, Patricio; Schmidt-Heck, Wolfgang; Hellwig, Birte; Nell, Patrick; Feuerborn, David; Rahnenführer, Jörg; Kattler, Kathrin; Walter, Jörn; Blüthgen, Nils; Hengstler, Jan G
2018-07-05
In recent years, protocols have been established to differentiate stem and precursor cells into more mature cell types. However, progress in this field has been hampered by difficulties in assessing the differentiation status of stem cell-derived cells in an unbiased manner. Here, we present an analysis pipeline based on published data and methods to quantify the degree of differentiation and to identify transcriptional control factors explaining differences from the intended target cells or tissues. The pipeline requires RNA-Seq or gene array data of the stem cell starting population, the derived 'mature' cells and the primary target cells or tissue. It consists of a principal component analysis to represent global expression changes and to identify possible problems of the dataset that require special attention, such as batch effects; clustering techniques to identify gene groups with similar features; over-representation analysis to characterize biological motifs and transcriptional control factors of the identified gene clusters; and metagenes as well as gene regulatory networks for quantitative cell-type assessment and identification of influential transcription factors. Possibilities and limitations of the analysis pipeline are illustrated using the example of human embryonic stem cells and human induced pluripotent cells used to generate 'hepatocyte-like cells'. The pipeline quantifies the degree of incomplete differentiation as well as remaining stemness and identifies unwanted features, such as colon- and fibroblast-associated gene clusters that are absent in real hepatocytes but typically induced by currently available differentiation protocols. Finally, transcription factors responsible for incomplete and unwanted differentiation are identified. The proposed method is widely applicable and allows an unbiased and quantitative assessment of stem cell-derived cells. This article is part of the theme issue 'Designer human tissue: coming to a lab near you'. © 2018 The Author(s).
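A minimal sketch of the pipeline's first step, PCA across the three sample groups, on synthetic data (group means, sizes and gene counts are invented):

    # PCA over stem cells, derived "mature" cells and primary target tissue to
    # expose global expression differences (and, in practice, batch effects).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    stem    = rng.normal(0.0, 1.0, size=(5, 2000))   # samples x genes
    derived = rng.normal(1.0, 1.0, size=(5, 2000))
    target  = rng.normal(3.0, 1.0, size=(5, 2000))

    X = np.vstack([stem, derived, target])
    scores = PCA(n_components=2).fit_transform(X)

    # Derived cells that differentiate well should move from the stem cluster
    # towards the primary-target cluster along the leading components.
    for label, block in zip(["stem", "derived", "target"], np.split(scores, 3)):
        print(label, block.mean(axis=0).round(2))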
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
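For readers who want to try the two methods side by side, the SALib package implements both; the sketch below substitutes the standard Ishigami test function for SAC-SMA (which would require running the hydrologic model), and newer SALib releases may prefer the sobol sampler over saltelli:

    # Compare Sobol' and FAST first-order indices with SALib on a test function.
    import numpy as np
    from SALib.sample import saltelli, fast_sampler
    from SALib.analyze import sobol, fast
    from SALib.test_functions import Ishigami

    problem = {"num_vars": 3,
               "names": ["x1", "x2", "x3"],
               "bounds": [[-np.pi, np.pi]] * 3}

    X = saltelli.sample(problem, 1024)            # Sobol' design
    S_sobol = sobol.analyze(problem, Ishigami.evaluate(X))

    Xf = fast_sampler.sample(problem, 1024)       # FAST design
    S_fast = fast.analyze(problem, Ishigami.evaluate(Xf))

    # Agreement check: do both methods rank the parameters the same way?
    print("Sobol' S1:", np.round(S_sobol["S1"], 2))
    print("FAST   S1:", np.round(S_fast["S1"], 2))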
NASA Astrophysics Data System (ADS)
Farsadnia, Farhad; Ghahreman, Bijan
2016-04-01
Hydrologic homogeneous group identification is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional means of assessing hydrologically homogeneous regions. Recently, the Self-Organizing feature Map (SOM) method has been applied in some studies. However, the main difficulty with this method is interpreting its output map; therefore, SOM is used as input to other clustering algorithms. The aim of this study is to apply a two-level Self-Organizing feature map and the Ward hierarchical clustering method to determine the hydrologically homogeneous regions in North and Razavi Khorasan provinces. First, we reduced the dimension of the SOM input matrix by principal component analysis; then the SOM was used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous; consequently, they have to be adjusted to improve their homogeneity. After the regions were adjusted using L-moment homogeneity tests, five hydrologically homogeneous regions were identified. Finally, adjusted regions were created by a two-level SOM, and then the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective at delineating hydrologically homogeneous regions than the hierarchical method applied to principal components or standardized inputs.
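A minimal sketch of the two-level scheme using the MiniSom package and SciPy's Ward linkage, on synthetic catchment attributes (the study's inputs, map size and cluster count differ):

    # Two-level clustering: train a SOM on PCA-reduced attributes, then
    # Ward-cluster the SOM codebook vectors into candidate homogeneous regions.
    import numpy as np
    from minisom import MiniSom
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    attributes = rng.normal(size=(120, 8))            # catchments x attributes

    X = PCA(n_components=3).fit_transform(attributes) # level 0: reduce dimension
    som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X, 2000)                         # level 1: 2-D feature map

    codebook = som.get_weights().reshape(-1, X.shape[1])
    labels = fcluster(linkage(codebook, method="ward"), t=5, criterion="maxclust")

    # Each catchment inherits the Ward label of its best-matching SOM node.
    winners = [som.winner(x) for x in X]
    regions = [labels[i * 6 + j] for i, j in winners]
    print(sorted(set(regions)))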
Fasihi, Yasser; Fooladi, Saba; Mohammadi, Mohammad Ali; Emaneini, Mohammad; Kalantar-Neyestanaki, Davood
2017-09-06
Molecular typing is an important tool for the control and prevention of infection. A suitable molecular typing method for epidemiological investigation must be easy to perform, highly reproducible, inexpensive, rapid and easy to interpret. In this study, two molecular typing methods, the conventional PCR-sequencing method and high resolution melting (HRM) analysis, were used for staphylococcal protein A (spa) typing of 30 methicillin-resistant Staphylococcus aureus (MRSA) isolates recovered from clinical samples. Based on the PCR-sequencing results, 16 different spa types were identified among the 30 MRSA isolates. Of these 16 spa types, 14 were separated by the HRM method; two spa types, t4718 and t2894, were not separated from each other. According to our results, spa typing based on HRM analysis is rapid, easy to perform and cost-effective, but the method must be standardized for different regions, spa types, and real-time machinery.
Catto, James W F; Abbod, Maysam F; Wild, Peter J; Linkens, Derek A; Pilarsky, Christian; Rehman, Ishtiaq; Rosario, Derek J; Denzinger, Stefan; Burger, Maximilian; Stoehr, Robert; Knuechel, Ruth; Hartmann, Arndt; Hamdy, Freddie C
2010-03-01
New methods for identifying bladder cancer (BCa) progression are required. Gene expression microarrays can reveal insights into disease biology and identify novel biomarkers. However, these experiments produce large datasets that are difficult to interpret. To develop a novel method of microarray analysis combining two forms of artificial intelligence (AI), neurofuzzy modelling (NFM) and artificial neural networks (ANN), and to validate it in a BCa cohort. We used AI and statistical analyses to identify progression-related genes in a microarray dataset (n=66 tumours, n=2800 genes). The AI-selected genes were then investigated in a second cohort (n=262 tumours) using immunohistochemistry. We compared the accuracy of the AI and statistical approaches in identifying tumour progression. AI identified 11 progression-associated genes (odds ratio [OR]: 0.70; 95% confidence interval [CI], 0.56-0.87; p=0.0004), and these were more discriminating than genes chosen using statistical analyses (OR: 1.24; 95% CI, 0.96-1.60; p=0.09). The expression of six AI-selected genes (LIG3, FAS, KRT18, ICAM1, DSG2, and BRCA2) was determined using commercial antibodies and successfully identified tumour progression (concordance index: 0.66; log-rank test: p=0.01). AI-selected genes were more discriminating than pathologic criteria at determining progression (Cox multivariate analysis: p=0.01). Limitations include the use of statistical correlation to identify 200 genes for AI analysis and the fact that we did not compare regression-identified genes with immunohistochemistry. AI and statistical analyses use different techniques of inference to determine gene-phenotype associations and identify distinct prognostic gene signatures that are equally valid. We have identified a prognostic gene signature whose members reflect a variety of carcinogenic pathways and that could identify progression in non-muscle-invasive BCa. 2009 European Association of Urology. Published by Elsevier B.V. All rights reserved.
2017-03-23
Solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart returns a matching... of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified...
Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki
2015-08-01
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
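A sketch of the proposed preprocessing chain with OpenCV; the file name and parameter values are placeholders chosen for illustration, not the paper's settings:

    # Non-local means de-noising followed by contrast-limited adaptive
    # histogram equalization (CLAHE). Input path and parameters are examples.
    import cv2

    img = cv2.imread("endoscopy_frame.png")             # BGR image (placeholder)

    denoised = cv2.fastNlMeansDenoisingColored(
        img, None, h=10, hColor=10, templateWindowSize=7, searchWindowSize=21)

    # CLAHE operates on a luminance channel, so convert to LAB first.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))

    result = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    cv2.imwrite("preprocessed.png", result)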
Effects of surface preparation on quality of aluminum alloy weldments
NASA Technical Reports Server (NTRS)
Kizer, D.; Saperstein, Z.
1968-01-01
Study of the effects of surface preparation and surface contamination on the welding of 2014 aluminum involves several methods of surface analysis to identify surface properties conducive to weld defects. These methods are radioactive evaporation, spectral reflectance, mass spectroscopy, gas chromatography, and spark emission spectroscopy.
Detection of Genetically Modified Sugarcane by Using Terahertz Spectroscopy and Chemometrics
NASA Astrophysics Data System (ADS)
Liu, J.; Xie, H.; Zha, B.; Ding, W.; Luo, J.; Hu, C.
2018-03-01
A methodology is proposed to distinguish genetically modified sugarcane from non-genetically modified sugarcane by using terahertz spectroscopy and chemometrics techniques, including linear discriminant analysis (LDA), support vector machine-discriminant analysis (SVM-DA), and partial least squares-discriminant analysis (PLS-DA). The classification rates of the above-mentioned methods are compared, and different types of preprocessing are considered. According to the experimental results, the best option is PLS-DA, with an identification rate of 98%. The results indicate that THz spectroscopy and chemometrics techniques are a powerful tool for identifying genetically modified and non-genetically modified sugarcane.
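PLS-DA is commonly run as PLS regression onto one-hot class labels with an argmax decision; here is a sketch on synthetic stand-in spectra (the 98% figure above comes from the paper's real THz data, not this toy):

    # PLS-DA sketch: regress one-hot labels on spectra, classify by argmax.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0.0, 1.0, (40, 200)),    # non-GM spectra (synthetic)
                   rng.normal(0.5, 1.0, (40, 200))])   # GM spectra (synthetic)
    y = np.repeat([0, 1], 40)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

    pls = PLSRegression(n_components=5)
    pls.fit(Xtr, np.eye(2)[ytr])                       # one-hot response
    pred = pls.predict(Xte).argmax(axis=1)
    print("identification rate:", (pred == yte).mean())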
Exploratory Mediation Analysis via Regularization
Serang, Sarfaraz; Jacobucci, Ross; Brimhall, Kim C.; Grimm, Kevin J.
2017-01-01
Exploratory mediation analysis refers to a class of methods used to identify a set of potential mediators of a process of interest. Despite its exploratory nature, conventional approaches are rooted in confirmatory traditions, and as such have limitations in exploratory contexts. We propose a two-stage approach called exploratory mediation analysis via regularization (XMed) to better address these concerns. We demonstrate that this approach is able to correctly identify mediators more often than conventional approaches and that its estimates are unbiased. Finally, this approach is illustrated through an empirical example examining the relationship between college acceptance and enrollment. PMID:29225454
NASA Astrophysics Data System (ADS)
Champeimont, Raphaël; Laine, Elodie; Hu, Shuang-Wei; Penin, Francois; Carbone, Alessandra
2016-05-01
A novel computational approach to coevolution analysis allowed us to reconstruct the protein-protein interaction network of the Hepatitis C Virus (HCV) at residue resolution. For the first time, coevolution analysis of an entire viral genome was realized, based on a limited set of protein sequences with high sequence identity within genotypes. The identified coevolving residues constitute highly relevant predictions of protein-protein interactions for further experimental identification of HCV protein complexes. The method can be used to analyse other viral genomes and to predict the associated protein interaction networks.
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to automatically identify the foot of the continental slope (FOS) based on integrated analysis of topographic profiles. Using the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles and then calculates the second derivative of both the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification criteria are proposed to determine the FOS points, including the gradient, water depth and second-derivative values of data points, as well as the concavity/convexity, continuity and segmentation of the topographic profiles. This method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives and D-P profiles, and on that basis it can analyze the essential properties of every single data point in a profile. Furthermore, concave points of the curve are removed, and six FOS judgment criteria are implemented.
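Two of the ingredients, Douglas-Peucker simplification and a second-derivative extremum as a candidate FOS, can be sketched on a synthetic shelf-slope-rise profile (the paper's seven-step procedure and six judgment criteria are more involved):

    # Douglas-Peucker simplification plus a curvature-based FOS candidate.
    # The profile shape and tolerance are invented for illustration.
    import numpy as np

    def douglas_peucker(pts, eps):
        """Recursively simplify a polyline (n x 2 array) with tolerance eps."""
        (x0, y0), (x1, y1) = pts[0], pts[-1]
        # Perpendicular distance of every point to the chord p0-p1.
        d = np.abs((x1 - x0) * (pts[:, 1] - y0) - (y1 - y0) * (pts[:, 0] - x0))
        d /= np.hypot(x1 - x0, y1 - y0)
        i = d.argmax()
        if d[i] > eps:
            left = douglas_peucker(pts[:i + 1], eps)
            return np.vstack([left[:-1], douglas_peucker(pts[i:], eps)])
        return np.vstack([pts[0], pts[-1]])

    x = np.linspace(0, 100, 400)                      # distance (km)
    depth = -2000 - 1500 * np.tanh((x - 50) / 10)     # shelf -> slope -> rise
    profile = np.column_stack([x, depth])

    simplified = douglas_peucker(profile, eps=25.0)
    d2 = np.gradient(np.gradient(depth, x), x)        # second derivative
    print("D-P kept", len(simplified), "points; candidate FOS at x =",
          round(x[d2.argmax()], 1), "km")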
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H
2014-08-08
For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and with mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled conditions in TGA onto solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison to the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.
A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.
Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd
2017-09-01
The conventional methods of assessing human gait are either too expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.
Choi, Ted; Eskin, Eleazar
2013-01-01
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue. PMID:23785294
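The meta-analytic building block that such frameworks extend is inverse-variance combination with a heterogeneity statistic; here is a sketch with invented per-tissue effects (the paper's two methods model cross-tissue heterogeneity more directly than this fixed-effects toy):

    # Fixed-effects meta-analysis of per-tissue eQTL effects, plus Cochran's Q
    # as a heterogeneity check. Effect sizes and standard errors are made up.
    import numpy as np
    from scipy.stats import chi2, norm

    beta = np.array([0.8, 0.7, 0.05, 0.9])   # per-tissue effect estimates
    se   = np.array([0.2, 0.25, 0.2, 0.3])

    w = 1.0 / se**2
    beta_fe = (w * beta).sum() / w.sum()      # combined effect
    se_fe = np.sqrt(1.0 / w.sum())
    p_fe = 2 * norm.sf(abs(beta_fe / se_fe))

    Q = (w * (beta - beta_fe) ** 2).sum()     # heterogeneity across tissues
    p_het = chi2.sf(Q, df=len(beta) - 1)

    print(f"combined beta={beta_fe:.2f}, p={p_fe:.2g}, Q={Q:.2f}, p_het={p_het:.2g}")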
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, B. P.; Mew, D. A.; DeHope, A.
Attribution of the origin of an illicit drug relies on identification of compounds indicative of its clandestine production and is a key component of many modern forensic investigations. The results of these studies can yield detailed information on method of manufacture, starting material source, and final product - all critical forensic evidence. In the present work, chemical attribution signatures (CAS) associated with the synthesis of the analgesic fentanyl, N-(1-phenylethylpiperidin-4-yl)-N-phenylpropanamide, were investigated. Six synthesis methods, all previously published fentanyl synthetic routes or hybrid versions thereof, were studied in an effort to identify and classify route-specific signatures. 160 distinct compounds and inorganic species were identified using gas and liquid chromatographies combined with mass spectrometric methods (GC-MS and LC-MS/MS-TOF) in conjunction with inductively coupled plasma mass spectrometry (ICP-MS). The complexity of the resultant data matrix prompted the use of multivariate statistical analysis. Using partial least squares discriminant analysis (PLS-DA), 87 route-specific CAS were classified and a statistical model capable of predicting the method of fentanyl synthesis was validated and tested against CAS profiles from crude fentanyl products deposited on and later extracted from two operationally relevant surfaces: stainless steel and vinyl tile. This work provides the most detailed fentanyl CAS investigation to date, using orthogonal mass spectral data to identify CAS of forensic significance for illicit drug detection, profiling, and attribution.
NASA Astrophysics Data System (ADS)
Kowalski, Dariusz
2017-06-01
The paper deals with a method to identify internal stresses in two-dimensional steel members. Steel members were investigated in the delivery stage and after assembly by means of electric-arc welding. In order to assess the members, two methods of identifying the stress variation were applied. The first is a non-destructive measurement method employing a local external magnetic field and detecting the induced voltage, including Barkhausen noise. The analysis of the latter allows internal stresses in a surface layer of the material to be assessed. The second method, essential in the paper, is a semi-trepanation Mathar method of tensometric strain-variation measurement during controlled void-making in the material. Variation of the internal stress distribution in the material informed the choice of welding technology for joining. The assembly process altered the existing stresses and created new ones, with post-welding stresses arising in response to the excessive stress variation.
NASA Technical Reports Server (NTRS)
Noor, A. K.
1983-01-01
Advances in continuum modeling, progress in reduction methods, and analysis and modeling needs for large space structures are covered, with specific attention given to repetitive lattice trusses. As far as continuum modeling is concerned, an effective and verified analysis capability exists for linear thermoelastic stress, bifurcation buckling, and free vibration problems of repetitive lattices. However, application of continuum modeling to nonlinear analysis needs more development. Reduction methods are very effective for bifurcation buckling and static (steady-state) nonlinear analysis. However, more work is needed to realize their full potential for nonlinear dynamic and time-dependent problems. As far as analysis and modeling needs are concerned, three areas are identified: loads determination, modeling and nonclassical behavior characteristics, and computational algorithms. The impact of new advances in computer hardware, software, integrated analysis, CAD/CAM systems, and materials technology is also discussed.
NASA Astrophysics Data System (ADS)
Makarova, Yuliya; Sokolov, Sergey; Glukhov, Anton
2014-05-01
The Shamanikha-Stolbovsky gold cluster is located in the North-East of Russia, in the basin of the Kolyma River. In 1933, gold placers were discovered there, but for more than 50 years the search for significant gold targets did not give positive results. In 2009-2011, geochemical and geophysical studies, mining and drilling were conducted within this cluster. Geochemical exploration was carried out in a modification based on superimposed secondary sorption-salt haloes (sampling densities of 250x250 m, 250x50 m, and 250x20 m) using the superfine fraction analysis method (SFAM) because of complicated landscape conditions (thick Quaternary sediments, widespread permafrost). The method consists in the extraction of the superfine fraction (<10 microns) from unconsolidated sediment samples, followed by transfer of sorption-salt forms of elements to a solution and analysis using quantitative methods. The method has worked well in areal geochemical studies of various scales in the Karelian-Kola region and in the Far East. Main results of the work in the Shamanikha-Stolbovsky area: 1. Geochemical exploration using SFAM with a sampling density of 250x250 m allowed the identification of zonal anomalous geochemical fields (AGCF) of ore deposit rank, promising for the discovery of gold mineralization (Nadezhda, Timsha, and Temny prospects). These AGCF are characterized by the following three-zonal structure (from the center to the periphery): a nucleus zone - the area of centripetal element concentration (Au, Ag, Sb, As, Cu, Hg, Bi, Pb, Mo); an exchange zone - the area of centrifugal element concentration (Mn, Zn, V, Ti, Co, Cr, Ni); and a flank concentration zone - the area of elevated contents of centripetal elements with sub-background centrifugal elements. 2. Detailed AGCF studies with sampling densities of 250x50 m (250x20 m) in the Nadezhda, Timsha, and Temny prospects made it possible to refine the composition and structure of the anomalous geochemical fields, identify potentially gold-bearing zones, and determine their formational affinity. Nadezhda site: contrast Au, Ag, Pb, Bi, Sb, As dispersion halos that form a linear anomalous geochemical field of ore body rank are identified. The predicted mineralization was related to the gold-sulfosalt mineral association according to the chemical composition of the secondary dispersion halos. Timsha site: contrast secondary Au, Ag, Sb, As, Hg, Pb, Bi dispersion halos are identified. These halos have a rhythmically banded structure, which can be caused by a stringer morphological type of mineralization. Bands with anomalously high contents of elements have been interpreted by the authors as probable auriferous bodies; four such bodies, 700 to 1500 m long, were identified. Mineralization of the gold-sulfide formation similar to the "Carlin" type is predicted according to the chemical composition of the secondary dispersion halos as well as geological features. Temny site: contrast secondary Au, Ag, W, Sb dispersion halos are identified. A series of geochemical associations was identified based on factor analysis results; the Au-Bi-W-Hg and Pb-Sb-Ag-Zn associations, apparently related to the mineralization, are of the greatest interest. The geochemical fields of these associations are closely spaced and overlap in plan view, which may be caused by axial zoning of a subvertically dipping auriferous body. Three linear geochemical zones corresponding to potentially auriferous zones with pyrite-type mineralization of the gold-quartz formation are identified within the core zone of the anomalous geochemical field.
3. In all these prospects, mining and drilling penetrated gold ore bodies within the identified potentially gold zones. The Nadezhda target now has the status of gold deposit.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool for informing decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
Utility of correlation techniques in gravity and magnetic interpretation
NASA Technical Reports Server (NTRS)
Chandler, V. W.; Koski, J. S.; Braile, L. W.; Hinze, W. J.
1977-01-01
Two methods of quantitative combined analysis, internal correspondence and clustering, are presented. Model studies are used to illustrate implementation and interpretation procedures for these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems. Thus, these techniques are useful in considering regional data acquired by satellites.
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables of a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with the identified outliers down-weighted. The application of the LMD and LMD-based reweighted least squares (RLS) methods to simulated and real fisheries SR data is explored.
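A least-median-of-squares fit is typically approximated by scanning many candidate lines; the sketch below does this with orthogonal residuals on synthetic SR-like data (the paper's algorithm may differ in detail):

    # Least median of squared orthogonal distances, approximated by random
    # candidate lines through point pairs. Data are synthetic with gross outliers.
    import numpy as np

    rng = np.random.default_rng(4)
    S = np.linspace(1, 10, 40)                      # stock
    R = 2.0 * S + rng.normal(0, 1, 40)              # recruitment
    R[::9] += 25                                    # a few gross outliers

    pts = np.column_stack([S, R])
    best, best_med = None, np.inf
    for _ in range(500):
        i, j = rng.choice(len(pts), 2, replace=False)
        d = pts[j] - pts[i]
        n = np.array([-d[1], d[0]]) / np.hypot(*d)  # unit normal of candidate line
        r2 = ((pts - pts[i]) @ n) ** 2              # squared orthogonal distances
        m = np.median(r2)
        if m < best_med:                            # keep the smallest median
            best_med, best = m, (pts[i], n)

    print("median squared orthogonal residual:", round(best_med, 3))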
Reliability of resting-state microstate features in electroencephalography.
Khanna, Arjun; Pascual-Leone, Alvaro; Farzan, Faranak
2014-01-01
Electroencephalographic (EEG) microstate analysis is a method of identifying quasi-stable functional brain states ("microstates") that are altered in a number of neuropsychiatric disorders, suggesting their potential use as biomarkers of neurophysiological health and disease. However, use of EEG microstates as neurophysiological biomarkers requires assessment of the test-retest reliability of microstate analysis. We analyzed resting-state, eyes-closed, 30-channel EEG from 10 healthy subjects over 3 sessions spaced approximately 48 hours apart. We identified four microstate classes and calculated the average duration, frequency, and coverage fraction of these microstates. Using Cronbach's α and the standard error of measurement (SEM) as indicators of reliability, we examined: (1) the test-retest reliability of microstate features using a variety of different approaches; (2) the consistency between TAAHC and k-means clustering algorithms; and (3) whether microstate analysis can be reliably conducted with 19 and 8 electrodes. The approach of identifying a single set of "global" microstate maps showed the highest reliability (mean Cronbach's α > 0.8, SEM ≈ 10% of mean values) compared to microstates derived by each session or each recording. There was notably low reliability in features calculated from maps extracted individually for each recording, suggesting that the analysis is most reliable when maps are held constant. Features were highly consistent across clustering methods (Cronbach's α > 0.9). All features had high test-retest reliability with 19 and 8 electrodes. High test-retest reliability and cross-method consistency of microstate features suggests their potential as biomarkers for assessment of the brain's neurophysiological health.
2015-01-01
Background Transgenerational epigenetics (TGE) are currently considered important in disease, but the mechanisms involved are not yet fully understood. TGE abnormalities expected to cause disease are likely to be initiated during development and to be mediated by aberrant gene expression associated with aberrant promoter methylation that is heritable between generations. However, because methylation is removed and then re-established during development, it is not easy to identify promoter methylation abnormalities by comparing normal lineages with those expected to exhibit TGE abnormalities. Methods This study applied the recently proposed principal component analysis (PCA)-based unsupervised feature extraction to previously reported and publicly available gene expression/promoter methylation profiles of rat primordial germ cells, between E13 and E16 of the F3 generation vinclozolin lineage that is expected to exhibit TGE abnormalities, to identify multiple genes that exhibited aberrant gene expression/promoter methylation during development. Results The biological feasibility of the identified genes was tested via enrichment analyses of various biological concepts, including pathway analysis, gene ontology terms and protein-protein interactions. All validations suggested superiority of the proposed method over three conventional and popular supervised methods that employed the t test, limma and significance analysis of microarrays, respectively. The identified genes were globally related to tumors, the prostate, kidney, testis and the immune system, and were previously reported to be related to various diseases caused by TGE. Conclusions Among the genes reported by PCA-based unsupervised feature extraction, we propose that chemokine signaling pathways and leucine-rich repeat proteins are key factors that initiate transgenerational epigenetic-mediated diseases, because multiple genes included in these two categories were identified in this study. PMID:26677731
Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta
2016-08-01
Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low, and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step in identifying functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
Industrial Instrument Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Dean, Ann; Zagorac, Mike; Bumbaka, Nick
This analysis covers tasks performed by an industrial instrument mechanic, an occupational title some provinces and territories of Canada have also identified as industrial instrumentation and instrument mechanic. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and safety. To facilitate…
Fine-scale genotyping methods are necessary in order to identify possible sources of human exposure to opportunistic pathogens belonging to the Mycobacterium avium complex (MAC). In this study, amplified fragment length polymorphism (AFLP) analysis was evaluated for fingerprintin...
Farm Equipment Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Ross, Douglas
This analysis covers tasks performed by a farm equipment mechanic, an occupational title some provinces and territories of Canada have also identified as agricultural machinery technician, agricultural mechanic, and farm equipment service technician. A guide to analysis discusses development, structure, and validation method; scope of the…
Recreation Vehicle Mechanic. Occupational Analyses Series.
ERIC Educational Resources Information Center
Dean, Ann; Embree, Rick
This analysis covers tasks performed by a recreation vehicle mechanic, an occupational title some provinces and territories of Canada have also identified as recreation vehicle technician and recreation vehicle service technician. A guide to analysis discusses development, structure, and validation method; scope of the occupation; trends; and…
Superintendent Leadership Style: A Gendered Discourse Analysis
ERIC Educational Resources Information Center
Wallin, Dawn C.; Crippen, Carolyn
2007-01-01
Using a blend of social constructionism, critical feminism, and dialogue theory, the discourse of nine Manitoba superintendents is examined to determine if it illustrates particular gendered assumptions regarding superintendents' leadership style. Qualitative inquiry and analysis methods were utilized to identify emerging themes, or topics of…
Ontology based molecular signatures for immune cell types via gene expression analysis
2013-01-01
Background New technologies are focusing on characterizing cell types to better understand their heterogeneity. With large volumes of cellular data being generated, innovative methods are needed to structure the resulting data analyses. Here, we describe an 'Ontologically BAsed Molecular Signature' (OBAMS) method that identifies novel cellular biomarkers and infers biological functions as characteristics of particular cell types. This method finds molecular signatures for immune cell types based on mapping biological samples to the Cell Ontology (CL) and navigating the space of all possible pairwise comparisons between cell types to find genes whose expression is core to a particular cell type's identity. Results We illustrate this ontological approach by evaluating expression data available from the Immunological Genome project (IGP) to identify unique biomarkers of mature B cell subtypes. We find that using OBAMS, candidate biomarkers can be identified at every stratum of cellular identity, from broad classifications to very granular. Furthermore, we show that the Gene Ontology can be used to cluster cell types by shared biological processes in order to find candidate genes responsible for somatic hypermutation in germinal center B cells. Moreover, through in silico experiments based on this approach, we have identified gene sets that represent genes overexpressed in germinal center B cells and identified genes uniquely expressed in these B cells compared with other B cell types. Conclusions This work demonstrates the utility of incorporating structured ontological knowledge into biological data analysis – providing a new method for defining novel biomarkers and providing an opportunity for new biological insights. PMID:24004649
Sources and concentrations of aldehydes and ketones in indoor environments in the UK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crump, D.R.; Gardiner, D.
1989-01-01
Individual aldehydes and ketones can be separated, identified and quantitatively estimated by trapping them as their 2,4-dinitrophenylhydrazine (DNPH) derivatives and analysing them by HPLC. Appropriate methods and detection limits are reported. Many sources of formaldehyde have been identified by this means, and some are found to emit other aldehydes and ketones. The application of this method to determine the concentration of these compounds in the atmospheres of buildings is described, and the results are compared with those obtained using chromotropic acid or MBTH.
Platek, S Frank; Keisler, Mark A; Ranieri, Nicola; Reynolds, Todd W; Crowe, John B
2002-09-01
The ability to accurately determine the number of syringe needle penetration holes through the rubber stoppers in pharmaceutical vials and rubber septa in intravenous (i.v.) line and bag ports has been a critical factor in a number of forensic cases involving the thefts of controlled substances or suspected homicide by lethal injection. In the early 1990s, the microscopy and microanalysis group of the U.S. Food and Drug Administration's Forensic Chemistry Center (FCC) developed and implemented a method (unpublished) to locate needle punctures in rubber pharmaceutical vial stoppers. In 1996, as part of a multiple homicide investigation, the Indiana State Police Laboratory (ISPL) contacted the FCC for information on a method to identify and count syringe needle punctures through rubber stoppers in pharmaceutical vials. In a joint project and investigation using the FCC's needle hole location method and applying a method of puncture site mapping developed by the ISPL, a systematic method was developed to locate, identify, count, and map syringe punctures in rubber bottle stoppers or i.v. bag ports using microscopic analysis. The method requires documentation of punctures on both sides of the rubber stoppers and microscopic analysis of each suspect puncture site. The final result of an analysis using the method is a detailed diagram of puncture holes on both sides of a questioned stopper and a record of the minimum number of puncture holes through a stopper.
Kao, Chi H.J.; Bishop, Karen S.; Xu, Yuanye; Han, Dug Yeo; Murray, Pamela M.; Marlow, Gareth J.; Ferguson, Lynnette R.
2016-01-01
Ganoderma lucidum (lingzhi) has been used for the general promotion of health in Asia for many centuries. The common method of consumption is to boil lingzhi in water and then drink the liquid. In this study, we examined the potential anticancer activities of G. lucidum submerged in two commonly consumed forms of alcohol in East Asia: malt whiskey and rice wine. The anticancer effect of G. lucidum, using whiskey and rice wine-based extraction methods, has not been previously reported. The growth inhibition of G. lucidum whiskey and rice wine extracts on the prostate cancer cell lines, PC3 and DU145, was determined. Using Affymetrix gene expression assays, several biologically active pathways associated with the anticancer activities of G. lucidum extracts were identified. Using gene expression analysis (real-time polymerase chain reaction [RT-PCR]) and protein analysis (Western blotting), we confirmed the expression of key genes and their associated proteins that were initially identified with Affymetrix gene expression analysis. PMID:27006591
Han, Shuting; Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael
2018-03-28
Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified six basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. © 2018, Han et al.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2013-01-01
Unhealthy behaviors increase individual health risks and are a socioeconomic burden. Harnessing social influence is perceived as fundamental for interventions to influence health-related behaviors. However, the mechanisms through which social influence occurs are poorly understood. Online social networks provide the opportunity to understand these mechanisms as they digitally archive communication between members. In this paper, we present a methodology for content-based social network analysis, combining qualitative coding, automated text analysis, and formal network analysis such that network structure is determined by the content of messages exchanged between members. We apply this approach to characterize the communication between members of QuitNet, an online social network for smoking cessation. Results indicate that the method identifies meaningful theme-based social sub-networks. Modeling social network data using this method can provide us with theme-specific insights such as the identities of opinion leaders and sub-community clusters. Implications for design of targeted social interventions are discussed.
Unger, E R; Lin, J-M S; Tian, H; Gurbaxani, B M; Boneva, R S; Jones, J F
2016-01-01
Multiple case definitions are in use to identify chronic fatigue syndrome (CFS). Even when using the same definition, methods used to apply definitional criteria may affect results. The Centers for Disease Control and Prevention (CDC) conducted two population-based studies estimating CFS prevalence using the 1994 case definition; one relied on direct questions for criteria of fatigue, functional impairment and symptoms (1997 Wichita; Method 1), and the other used subscale score thresholds of standardized questionnaires for criteria (2004 Georgia; Method 2). Compared to previous reports the 2004 CFS prevalence estimate was higher, raising questions about whether changes in the method of operationalizing affected this and illness characteristics. The follow-up of the Georgia cohort allowed direct comparison of both methods of applying the 1994 case definition. Of 1961 participants (53 % of eligible) who completed the detailed telephone interview, 919 (47 %) were eligible for and 751 (81 %) underwent clinical evaluation including medical/psychiatric evaluations. Data from the 499 individuals with complete data and without exclusionary conditions was available for this analysis. A total of 86 participants were classified as CFS by one or both methods; 44 cases identified by both methods, 15 only identified by Method 1, and 27 only identified by Method 2 (Kappa 0.63; 95 % confidence interval [CI]: 0.53, 0.73 and concordance 91.59 %). The CFS group identified by both methods were more fatigued, had worse functioning, and more symptoms than those identified by only one method. Moderate to severe depression was noted in only one individual who was classified as CFS by both methods. When comparing the CFS groups identified by only one method, those only identified by Method 2 were either similar to or more severely affected in fatigue, function, and symptoms than those only identified by Method 1. The two methods demonstrated substantial concordance. While Method 2 classified more participants as CFS, there was no indication that they were less severely ill or more depressed. The classification differences do not fully explain the prevalence increase noted in the 2004 Georgia study. Use of standardized instruments for the major CFS domains provides advantages for disease stratification and comparing CFS patients to other illnesses.
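The reported agreement statistics can be reproduced directly from the counts given in the abstract; a short worked check:

    # Agreement between the two classification methods from the 2x2 counts:
    # 44 CFS by both methods, 15 by Method 1 only, 27 by Method 2 only,
    # leaving 413 of the 499 participants negative by both.
    a, b, c, n = 44, 15, 27, 499
    d = n - a - b - c                          # 413 concordant negatives

    po = (a + d) / n                           # observed concordance
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)

    # Prints ~91.6% and 0.63, matching the values reported above.
    print(f"concordance = {po:.2%}, kappa = {kappa:.2f}")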
Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping
2014-11-15
Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed by individual data analysis. Because of the different natures of diverse data modalities, data integration is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) using weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM model to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim to identify biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by a ten-fold cross validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis. In addition, we compared with the traditional statistical method for univariate data analysis (Chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers that show stronger capability in distinguishing schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Li, Z. K.
1985-01-01
A specialized program was developed for flow cytometric list-mode data using a hierarchical tree method for identifying and enumerating individual subpopulations, the method of principal components for a two-dimensional display of a 6-parameter data array, and a standard sorting algorithm for characterizing subpopulations. The program was tested against a published data set subjected to cluster analysis and against experimental data sets from controlled flow cytometry experiments using a Coulter Electronics EPICS V Cell Sorter. A version of the program in compiled BASIC is usable on a 16-bit microcomputer with the MS-DOS operating system. It is specialized for 6 parameters and up to 20,000 cells. Its two-dimensional display of Euclidean distances reveals clusters clearly, as does its 1-dimensional display. The identified subpopulations can, in suitable experiments, be related to functional subpopulations of cells.
NASA Astrophysics Data System (ADS)
Tsai, Christina; Yeh, Ting-Gu
2017-04-01
Extreme weather events are occurring more frequently as a result of climate change. Recently, dengue fever has become a serious issue in southern Taiwan. It may have characteristic temporal scales that can be identified. Some researchers have hypothesized that dengue fever incidence is related to climate change. This study applies time-frequency analysis to time series data concerning dengue fever and hydrologic and meteorological variables. Results of three time-frequency analysis methods - the Hilbert-Huang transform (HHT), the wavelet transform (WT) and the short-time Fourier transform (STFT) - are compared and discussed. A more effective time-frequency analysis method will be identified to analyze the relevant time series data. The most influential time scales of hydrologic and meteorological variables that are associated with dengue fever are determined. Finally, the linkage between hydrologic/meteorological factors and dengue fever incidence can be established.
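A sketch contrasting two of the three methods (STFT via SciPy, WT via PyWavelets) on a synthetic weekly series with an annual cycle and a transient burst; the HHT requires an empirical mode decomposition package and is omitted here:

    # Compare fixed-window STFT against scale-adaptive CWT on a toy series
    # standing in for weekly dengue counts; all signal parameters are invented.
    import numpy as np
    import pywt
    from scipy.signal import stft

    fs = 52.0                                   # samples per year (weekly data)
    t = np.arange(520) / fs                     # ten years
    x = np.sin(2 * np.pi * 1.0 * t)             # annual cycle (1 cycle/year)
    x[260:290] += 2 * np.sin(2 * np.pi * 6.0 * t[260:290])   # transient outbreak

    f, tt, Z = stft(x, fs=fs, nperseg=104)      # STFT with fixed 2-year windows
    coef, freqs = pywt.cwt(x, np.arange(1, 64), "morl", sampling_period=1 / fs)

    # The CWT's scale-dependent resolution localises the burst better than the
    # fixed-window STFT, which motivates comparing such methods for dengue data.
    print("STFT grid:", Z.shape, " CWT grid:", coef.shape)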